Posted to github@arrow.apache.org by "zeroshade (via GitHub)" <gi...@apache.org> on 2023/02/13 19:40:55 UTC

[GitHub] [arrow] zeroshade opened a new pull request, #34172: GH-34171: [Go][Compute] Implement "Unique" kernel

zeroshade opened a new pull request, #34172:
URL: https://github.com/apache/arrow/pull/34172

   <!--
   Thanks for opening a pull request!
   If this is your first pull request you can find detailed information on how 
   to contribute here:
     * [New Contributor's Guide](https://arrow.apache.org/docs/dev/developers/guide/step_by_step/pr_lifecycle.html#reviews-and-merge-of-the-pull-request)
     * [Contributing Overview](https://arrow.apache.org/docs/dev/developers/overview.html)
   
   
   If this is not a [minor PR](https://github.com/apache/arrow/blob/master/CONTRIBUTING.md#Minor-Fixes), could you open an issue for this pull request on GitHub? https://github.com/apache/arrow/issues/new/choose
   
   Opening GitHub issues ahead of time contributes to the [Openness](http://theapacheway.com/open/#:~:text=Openness%20allows%20new%20users%20the,must%20happen%20in%20the%20open.) of the Apache Arrow project.
   
   Then could you also rename the pull request title in the following format?
   
       GH-${GITHUB_ISSUE_ID}: [${COMPONENT}] ${SUMMARY}
   
   or
   
       MINOR: [${COMPONENT}] ${SUMMARY}
   
   In the case of PARQUET issues on JIRA the title also supports:
   
       PARQUET-${JIRA_ISSUE_ID}: [${COMPONENT}] ${SUMMARY}
   
   -->
   
   ### Rationale for this change
   
   Implements a kernel for computing the "unique" values in an Arrow array, primarily for use in solving #33466.
   
   <!--
    Why are you proposing this change? If this is already explained clearly in the issue then this section is not needed.
    Explaining clearly why changes are proposed helps reviewers understand your changes and offer better suggestions for fixes.  
   -->
   
   ### What changes are included in this PR?
   Adds a "unique" function to the list of available compute functions, along with convenience helper functions.
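
   As a rough illustration of the intended usage, here is a minimal sketch based on the signatures exercised by the unit tests in this PR (error handling kept short):

   ```go
   package main

   import (
   	"context"
   	"fmt"

   	"github.com/apache/arrow/go/v12/arrow/array"
   	"github.com/apache/arrow/go/v12/arrow/compute"
   	"github.com/apache/arrow/go/v12/arrow/memory"
   )

   func main() {
   	mem := memory.DefaultAllocator

   	// Build a small int64 array with duplicates and a null.
   	bldr := array.NewInt64Builder(mem)
   	defer bldr.Release()
   	bldr.AppendValues([]int64{2, 1, 2}, nil)
   	bldr.AppendNull()
   	arr := bldr.NewArray()
   	defer arr.Release()

   	// UniqueArray returns each distinct value (including the null) exactly once.
   	uniq, err := compute.UniqueArray(context.Background(), arr)
   	if err != nil {
   		panic(err)
   	}
   	defer uniq.Release()

   	fmt.Println(uniq) // e.g. [2 1 (null)]
   }
   ```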
   
   <!--
   There is no need to duplicate the description in the issue here but it is sometimes worth providing a summary of the individual changes in this PR.
   -->
   
   ### Are these changes tested?
   Yes, unit tests are included.
   <!--
   We typically require tests for all PRs in order to:
   1. Prevent the code from being accidentally broken by subsequent changes
   2. Serve as another way to document the expected behavior of the code
   
   If tests are not included in your PR, please explain why (for example, are they covered by existing tests)?
   -->
   
   ### Are there any user-facing changes?
   Just the newly available functions.
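
   For chunked input there is also a Datum-based entry point; a minimal sketch mirroring `TestUniqueChunkedArrayInvoke` in this PR (assuming the same imports as the example above plus the top-level `arrow` package):

   ```go
   // uniqueOfChunked returns the distinct values of a chunked array as a
   // single array, following the pattern used in TestUniqueChunkedArrayInvoke.
   func uniqueOfChunked(ctx context.Context, carr *arrow.Chunked) (arrow.Array, error) {
   	datum, err := compute.Unique(ctx, &compute.ChunkedDatum{Value: carr})
   	if err != nil {
   		return nil, err
   	}
   	defer datum.Release()

   	// Even for chunked input, the result comes back as a single array Datum.
   	return datum.(*compute.ArrayDatum).MakeArray(), nil
   }
   ```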
   <!--
   If there are user-facing changes then we may require documentation to be updated before approving the PR.
   -->
   
   <!--
   If there are any breaking changes to public APIs, please uncomment the line below and explain which changes are breaking.
   -->
   <!-- **This PR includes breaking changes to public APIs.** -->
   
   <!--
   Please uncomment the line below (and provide an explanation) if the changes fix either (a) a security vulnerability, (b) a bug that caused incorrect or invalid data to be produced, or (c) a bug that causes a crash (even when the API contract is upheld). We use this to highlight fixes to issues that may affect users without their knowledge. For this reason, fixing bugs that cause errors doesn't count, since those are usually obvious.
   -->
   <!-- **This PR contains a "Critical Fix".** -->


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [arrow] github-actions[bot] commented on pull request #34172: GH-34171: [Go][Compute] Implement "Unique" kernel

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on PR #34172:
URL: https://github.com/apache/arrow/pull/34172#issuecomment-1428551616

   * Closes: #34171


[GitHub] [arrow] zeroshade commented on a diff in pull request #34172: GH-34171: [Go][Compute] Implement "Unique" kernel

Posted by "zeroshade (via GitHub)" <gi...@apache.org>.
zeroshade commented on code in PR #34172:
URL: https://github.com/apache/arrow/pull/34172#discussion_r1106078089


##########
go/arrow/array/util.go:
##########
@@ -272,6 +272,17 @@ func getDictArrayData(mem memory.Allocator, valueType arrow.DataType, memoTable
 			offsets := arrow.Int32Traits.CastFromBytes(buffers[1].Bytes())
 			tbl.CopyOffsetsSubset(startOffset, offsets)
 
+			valuesz := offsets[len(offsets)-1] - offsets[0]
+			buffers[2].Resize(int(valuesz))
+			tbl.CopyValuesSubset(startOffset, buffers[2].Bytes())
+		case arrow.LARGE_BINARY, arrow.LARGE_STRING:

Review Comment:
   Yea, look at lines 294-297 of `vector_hash_test.go`: the whole test suite for binary/string types also gets run with the LargeBinary and LargeString types. That's actually how I found this issue in the first place, so I could fix it :) haha.
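
   For reference, the registrations in question (quoted from `TestHashKernels` in the new `vector_hash_test.go`) are:

   ```go
   // The binary/string suite is also instantiated for the Large variants,
   // which is what exercises the LARGE_BINARY / LARGE_STRING branch here.
   suite.Run(t, &BinaryTypeHashKernelSuite[string]{dt: arrow.BinaryTypes.String})
   suite.Run(t, &BinaryTypeHashKernelSuite[string]{dt: arrow.BinaryTypes.LargeString})
   suite.Run(t, &BinaryTypeHashKernelSuite[[]byte]{dt: arrow.BinaryTypes.Binary})
   suite.Run(t, &BinaryTypeHashKernelSuite[[]byte]{dt: arrow.BinaryTypes.LargeBinary})
   ```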



[GitHub] [arrow] zeroshade merged pull request #34172: GH-34171: [Go][Compute] Implement "Unique" kernel

Posted by "zeroshade (via GitHub)" <gi...@apache.org>.
zeroshade merged PR #34172:
URL: https://github.com/apache/arrow/pull/34172


[GitHub] [arrow] ursabot commented on pull request #34172: GH-34171: [Go][Compute] Implement "Unique" kernel

Posted by "ursabot (via GitHub)" <gi...@apache.org>.
ursabot commented on PR #34172:
URL: https://github.com/apache/arrow/pull/34172#issuecomment-1430625056

   Benchmark runs are scheduled for baseline = 266f16668050d2c901328c11a6da2b31b01ea302 and contender = 90071ccd67fb7a81dacbb1966d881b7705d62b97. 90071ccd67fb7a81dacbb1966d881b7705d62b97 is a master commit associated with this PR. Results will be available as each benchmark for each run completes.
   Conbench compare runs links:
   [Finished :arrow_down:0.0% :arrow_up:0.0%] [ec2-t3-xlarge-us-east-2](https://conbench.ursa.dev/compare/runs/b6414f4e0a5a4aeda59b1039b7430b40...17b1aff1f0be479bafc369c3cc301cdd/)
   [Failed :arrow_down:0.61% :arrow_up:0.0%] [test-mac-arm](https://conbench.ursa.dev/compare/runs/4e4082058e0b4b0187025c7cc0f51b31...11054437904f4b7295eaeaf6dffbde0a/)
   [Finished :arrow_down:0.0% :arrow_up:0.0%] [ursa-i9-9960x](https://conbench.ursa.dev/compare/runs/93a6d571c73c44edbaaac90eed6e8ced...4c1918339fca40aca915312529b050ed/)
   [Finished :arrow_down:0.16% :arrow_up:0.03%] [ursa-thinkcentre-m75q](https://conbench.ursa.dev/compare/runs/fd15f2f4fb9e43cfa4f3a2fd9888872b...0ab3f013427544f5b4e904bbf68ed6ec/)
   Buildkite builds:
   [Finished] [`90071ccd` ec2-t3-xlarge-us-east-2](https://buildkite.com/apache-arrow/arrow-bci-benchmark-on-ec2-t3-xlarge-us-east-2/builds/2372)
   [Failed] [`90071ccd` test-mac-arm](https://buildkite.com/apache-arrow/arrow-bci-benchmark-on-test-mac-arm/builds/2402)
   [Finished] [`90071ccd` ursa-i9-9960x](https://buildkite.com/apache-arrow/arrow-bci-benchmark-on-ursa-i9-9960x/builds/2370)
   [Finished] [`90071ccd` ursa-thinkcentre-m75q](https://buildkite.com/apache-arrow/arrow-bci-benchmark-on-ursa-thinkcentre-m75q/builds/2394)
   [Finished] [`266f1666` ec2-t3-xlarge-us-east-2](https://buildkite.com/apache-arrow/arrow-bci-benchmark-on-ec2-t3-xlarge-us-east-2/builds/2371)
   [Failed] [`266f1666` test-mac-arm](https://buildkite.com/apache-arrow/arrow-bci-benchmark-on-test-mac-arm/builds/2401)
   [Finished] [`266f1666` ursa-i9-9960x](https://buildkite.com/apache-arrow/arrow-bci-benchmark-on-ursa-i9-9960x/builds/2369)
   [Finished] [`266f1666` ursa-thinkcentre-m75q](https://buildkite.com/apache-arrow/arrow-bci-benchmark-on-ursa-thinkcentre-m75q/builds/2393)
   Supported benchmarks:
   ec2-t3-xlarge-us-east-2: Supported benchmark langs: Python, R. Runs only benchmarks with cloud = True
   test-mac-arm: Supported benchmark langs: C++, Python, R
   ursa-i9-9960x: Supported benchmark langs: Python, R, JavaScript
   ursa-thinkcentre-m75q: Supported benchmark langs: C++, Java
   


[GitHub] [arrow] lidavidm commented on a diff in pull request #34172: GH-34171: [Go][Compute] Implement "Unique" kernel

Posted by "lidavidm (via GitHub)" <gi...@apache.org>.
lidavidm commented on code in PR #34172:
URL: https://github.com/apache/arrow/pull/34172#discussion_r1105839085


##########
go/arrow/array/util.go:
##########
@@ -272,6 +272,17 @@ func getDictArrayData(mem memory.Allocator, valueType arrow.DataType, memoTable
 			offsets := arrow.Int32Traits.CastFromBytes(buffers[1].Bytes())
 			tbl.CopyOffsetsSubset(startOffset, offsets)
 
+			valuesz := offsets[len(offsets)-1] - offsets[0]
+			buffers[2].Resize(int(valuesz))
+			tbl.CopyValuesSubset(startOffset, buffers[2].Bytes())
+		case arrow.LARGE_BINARY, arrow.LARGE_STRING:

Review Comment:
   Is there a unit test that covers this case?



##########
go/arrow/compute/vector_hash_test.go:
##########
@@ -0,0 +1,540 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build go1.18
+
+package compute_test
+
+import (
+	"context"
+	"strings"
+	"testing"
+
+	"github.com/apache/arrow/go/v12/arrow"
+	"github.com/apache/arrow/go/v12/arrow/array"
+	"github.com/apache/arrow/go/v12/arrow/compute"
+	"github.com/apache/arrow/go/v12/arrow/compute/internal/exec"
+	"github.com/apache/arrow/go/v12/arrow/decimal128"
+	"github.com/apache/arrow/go/v12/arrow/decimal256"
+	"github.com/apache/arrow/go/v12/arrow/memory"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+	"github.com/stretchr/testify/suite"
+	"golang.org/x/exp/constraints"
+)
+
+func checkUniqueDict[I exec.IntTypes | exec.UintTypes](t *testing.T, input compute.ArrayLikeDatum, expected arrow.Array) {
+	out, err := compute.Unique(context.TODO(), input)
+	require.NoError(t, err)
+	defer out.Release()
+
+	result := out.(*compute.ArrayDatum).MakeArray().(*array.Dictionary)
+	defer result.Release()
+
+	require.Truef(t, arrow.TypeEqual(result.DataType(), expected.DataType()),
+		"wanted: %s\ngot: %s", expected.DataType(), result.DataType())
+
+	exDict := expected.(*array.Dictionary).Dictionary()
+	resultDict := result.Dictionary()
+
+	require.Truef(t, array.Equal(exDict, resultDict), "wanted: %s\ngot: %s", exDict, resultDict)
+
+	want := exec.GetValues[I](expected.(*array.Dictionary).Indices().Data(), 1)
+	got := exec.GetValues[I](result.Indices().Data(), 1)
+	assert.ElementsMatchf(t, got, want, "wanted: %s\ngot: %s", want, got)
+}
+
+func checkDictionaryUnique(t *testing.T, input compute.ArrayLikeDatum, expected arrow.Array) {
+	require.Truef(t, arrow.TypeEqual(input.Type(), expected.DataType()),
+		"wanted: %s\ngot: %s", expected.DataType(), input.Type())
+
+	switch input.Type().(*arrow.DictionaryType).IndexType.ID() {
+	case arrow.INT8:
+		checkUniqueDict[int8](t, input, expected)
+	case arrow.INT16:
+		checkUniqueDict[int16](t, input, expected)
+	case arrow.INT32:
+		checkUniqueDict[int32](t, input, expected)
+	case arrow.INT64:
+		checkUniqueDict[int64](t, input, expected)
+	case arrow.UINT8:
+		checkUniqueDict[uint8](t, input, expected)
+	case arrow.UINT16:
+		checkUniqueDict[uint16](t, input, expected)
+	case arrow.UINT32:
+		checkUniqueDict[uint32](t, input, expected)
+	case arrow.UINT64:
+		checkUniqueDict[uint64](t, input, expected)
+	}
+}
+
+func checkUniqueFixedWidth[T exec.FixedWidthTypes](t *testing.T, input, expected arrow.Array) {
+	result, err := compute.UniqueArray(context.TODO(), input)
+	require.NoError(t, err)
+	defer result.Release()
+
+	require.Truef(t, arrow.TypeEqual(result.DataType(), expected.DataType()),
+		"wanted: %s\ngot: %s", expected.DataType(), result.DataType())
+	want := exec.GetValues[T](expected.Data(), 1)
+	got := exec.GetValues[T](result.Data(), 1)
+
+	assert.ElementsMatchf(t, got, want, "wanted: %s\ngot: %s", want, got)
+}
+
+func checkUniqueVariableWidth[OffsetType int32 | int64](t *testing.T, input, expected arrow.Array) {
+	result, err := compute.UniqueArray(context.TODO(), input)
+	require.NoError(t, err)
+	defer result.Release()
+
+	require.Truef(t, arrow.TypeEqual(result.DataType(), expected.DataType()),
+		"wanted: %s\ngot: %s", expected.DataType(), result.DataType())
+
+	require.EqualValues(t, expected.Len(), result.Len())
+
+	createSlice := func(v arrow.Array) [][]byte {
+		var (
+			offsets = exec.GetOffsets[OffsetType](v.Data(), 1)
+			data    = v.Data().Buffers()[2].Bytes()
+			out     = make([][]byte, v.Len())
+		)
+
+		for i := 0; i < v.Len(); i++ {
+			out[i] = data[offsets[i]:offsets[i+1]]
+		}
+		return out
+	}
+
+	want := createSlice(expected)
+	got := createSlice(result)
+
+	assert.ElementsMatch(t, want, got)
+}
+
+type ArrowType interface {
+	exec.FixedWidthTypes | string | []byte
+}
+
+type builder[T ArrowType] interface {
+	AppendValues([]T, []bool)
+}
+
+func makeArray[T ArrowType](mem memory.Allocator, dt arrow.DataType, values []T, isValid []bool) arrow.Array {
+	bldr := array.NewBuilder(mem, dt)
+	defer bldr.Release()
+
+	bldr.(builder[T]).AppendValues(values, isValid)
+	return bldr.NewArray()
+}
+
+func checkUniqueFixedSizeBinary(t *testing.T, mem memory.Allocator, dt *arrow.FixedSizeBinaryType, inValues, outValues [][]byte, inValid, outValid []bool) {
+	input := makeArray(mem, dt, inValues, inValid)
+	defer input.Release()
+	expected := makeArray(mem, dt, outValues, outValid)
+	defer expected.Release()
+
+	result, err := compute.UniqueArray(context.TODO(), input)
+	require.NoError(t, err)
+	defer result.Release()
+
+	require.Truef(t, arrow.TypeEqual(result.DataType(), expected.DataType()),
+		"wanted: %s\ngot: %s", expected.DataType(), result.DataType())
+
+	slice := func(v arrow.Array) [][]byte {
+		data := v.Data().Buffers()[1].Bytes()
+		out := make([][]byte, v.Len())
+		for i := range out {
+			out[i] = data[i*dt.ByteWidth : (i+1)*dt.ByteWidth]
+		}
+		return out
+	}
+
+	want := slice(expected)
+	got := slice(result)
+	assert.ElementsMatch(t, want, got)
+}
+
+func checkUniqueFW[T exec.FixedWidthTypes](t *testing.T, mem memory.Allocator, dt arrow.DataType, inValues, outValues []T, inValid, outValid []bool) {
+	input := makeArray(mem, dt, inValues, inValid)
+	defer input.Release()
+	expected := makeArray(mem, dt, outValues, outValid)
+	defer expected.Release()
+
+	checkUniqueFixedWidth[T](t, input, expected)
+}
+
+func checkUniqueVW[T string | []byte](t *testing.T, mem memory.Allocator, dt arrow.DataType, inValues, outValues []T, inValid, outValid []bool) {
+	input := makeArray(mem, dt, inValues, inValid)
+	defer input.Release()
+	expected := makeArray(mem, dt, outValues, outValid)
+	defer expected.Release()
+
+	switch dt.(arrow.BinaryDataType).Layout().Buffers[1].ByteWidth {
+	case 4:
+		checkUniqueVariableWidth[int32](t, input, expected)
+	case 8:
+		checkUniqueVariableWidth[int64](t, input, expected)
+	}
+}
+
+type PrimitiveHashKernelSuite[T exec.IntTypes | exec.UintTypes | constraints.Float] struct {
+	suite.Suite
+
+	mem *memory.CheckedAllocator
+	dt  arrow.DataType
+}
+
+func (ps *PrimitiveHashKernelSuite[T]) SetupSuite() {
+	ps.dt = exec.GetDataType[T]()
+}
+
+func (ps *PrimitiveHashKernelSuite[T]) SetupTest() {
+	ps.mem = memory.NewCheckedAllocator(memory.DefaultAllocator)
+}
+
+func (ps *PrimitiveHashKernelSuite[T]) TearDownTest() {
+	ps.mem.AssertSize(ps.T(), 0)
+}
+
+func (ps *PrimitiveHashKernelSuite[T]) TestUnique() {
+	ps.Run(ps.dt.String(), func() {
+		if ps.dt.ID() == arrow.DATE64 {
+			checkUniqueFW(ps.T(), ps.mem, ps.dt,
+				[]arrow.Date64{172800000, 864000000, 172800000, 864000000},
+				[]arrow.Date64{172800000, 0, 864000000},
+				[]bool{true, false, true, true}, []bool{true, false, true})
+
+			checkUniqueFW(ps.T(), ps.mem, ps.dt,
+				[]arrow.Date64{172800000, 864000000, 259200000, 864000000},
+				[]arrow.Date64{0, 259200000, 864000000},
+				[]bool{false, false, true, true}, []bool{false, true, true})
+
+			arr, _, err := array.FromJSON(ps.mem, ps.dt, strings.NewReader(`[86400000, 172800000, null, 259200000, 172800000, null]`))
+			ps.Require().NoError(err)
+			defer arr.Release()
+			input := array.NewSlice(arr, 1, 5)
+			defer input.Release()
+			expected, _, err := array.FromJSON(ps.mem, ps.dt, strings.NewReader(`[172800000, null, 259200000]`))
+			ps.Require().NoError(err)
+			defer expected.Release()
+			checkUniqueFixedWidth[arrow.Date64](ps.T(), input, expected)
+			return
+		}
+
+		checkUniqueFW(ps.T(), ps.mem, ps.dt,
+			[]T{2, 1, 2, 1}, []T{2, 0, 1},
+			[]bool{true, false, true, true}, []bool{true, false, true})
+		checkUniqueFW(ps.T(), ps.mem, ps.dt,
+			[]T{2, 1, 3, 1}, []T{0, 3, 1},
+			[]bool{false, false, true, true}, []bool{false, true, true})
+
+		arr, _, err := array.FromJSON(ps.mem, ps.dt, strings.NewReader(`[1, 2, null, 3, 2, null]`))
+		ps.Require().NoError(err)
+		defer arr.Release()
+		input := array.NewSlice(arr, 1, 5)
+		defer input.Release()
+
+		expected, _, err := array.FromJSON(ps.mem, ps.dt, strings.NewReader(`[2, null, 3]`))
+		ps.Require().NoError(err)
+		defer expected.Release()
+
+		checkUniqueFixedWidth[T](ps.T(), input, expected)
+	})
+}
+
+type BinaryTypeHashKernelSuite[T string | []byte] struct {
+	suite.Suite
+
+	mem *memory.CheckedAllocator
+	dt  arrow.DataType
+}
+
+func (ps *BinaryTypeHashKernelSuite[T]) SetupTest() {
+	ps.mem = memory.NewCheckedAllocator(memory.DefaultAllocator)
+}
+
+func (ps *BinaryTypeHashKernelSuite[T]) TearDownTest() {
+	ps.mem.AssertSize(ps.T(), 0)
+}
+
+func (ps *BinaryTypeHashKernelSuite[T]) TestUnique() {
+	ps.Run(ps.dt.String(), func() {
+		checkUniqueVW(ps.T(), ps.mem, ps.dt,
+			[]T{T("test"), T(""), T("test2"), T("test")}, []T{T("test"), T(""), T("test2")},
+			[]bool{true, false, true, true}, []bool{true, false, true})
+	})
+}
+
+func TestHashKernels(t *testing.T) {
+	suite.Run(t, &PrimitiveHashKernelSuite[int8]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[uint8]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[int16]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[uint16]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[int32]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[uint32]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[int64]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[uint64]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[float32]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[float64]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[arrow.Date32]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[arrow.Date64]{})
+
+	suite.Run(t, &BinaryTypeHashKernelSuite[string]{dt: arrow.BinaryTypes.String})
+	suite.Run(t, &BinaryTypeHashKernelSuite[string]{dt: arrow.BinaryTypes.LargeString})
+	suite.Run(t, &BinaryTypeHashKernelSuite[[]byte]{dt: arrow.BinaryTypes.Binary})
+	suite.Run(t, &BinaryTypeHashKernelSuite[[]byte]{dt: arrow.BinaryTypes.LargeBinary})
+}
+
+func TestUniqueTimeTimestamp(t *testing.T) {
+	mem := memory.NewCheckedAllocator(memory.DefaultAllocator)
+	defer mem.AssertSize(t, 0)
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.Time32s,
+		[]arrow.Time32{2, 1, 2, 1}, []arrow.Time32{2, 0, 1},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.Time64ns,
+		[]arrow.Time64{2, 1, 2, 1}, []arrow.Time64{2, 0, 1},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.Timestamp_ns,
+		[]arrow.Timestamp{2, 1, 2, 1}, []arrow.Timestamp{2, 0, 1},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.Duration_ns,
+		[]arrow.Duration{2, 1, 2, 1}, []arrow.Duration{2, 0, 1},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+}
+
+func TestUniqueFixedSizeBinary(t *testing.T) {
+	mem := memory.NewCheckedAllocator(memory.DefaultAllocator)
+	defer mem.AssertSize(t, 0)
+
+	dt := &arrow.FixedSizeBinaryType{ByteWidth: 3}
+	checkUniqueFixedSizeBinary(t, mem, dt,
+		[][]byte{[]byte("aaa"), nil, []byte("bbb"), []byte("aaa")},
+		[][]byte{[]byte("aaa"), nil, []byte("bbb")},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+}
+
+func TestUniqueDecimal(t *testing.T) {
+	t.Run("decimal128", func(t *testing.T) {
+		mem := memory.NewCheckedAllocator(memory.DefaultAllocator)
+		defer mem.AssertSize(t, 0)
+
+		values := []decimal128.Num{
+			decimal128.FromI64(12),
+			decimal128.FromI64(12),
+			decimal128.FromI64(11),
+			decimal128.FromI64(12)}
+		expected := []decimal128.Num{
+			decimal128.FromI64(12),
+			decimal128.FromI64(0),
+			decimal128.FromI64(11)}
+
+		checkUniqueFW(t, mem, &arrow.Decimal128Type{Precision: 2, Scale: 0},
+			values, expected, []bool{true, false, true, true}, []bool{true, false, true})
+	})
+
+	t.Run("decimal256", func(t *testing.T) {
+		mem := memory.NewCheckedAllocator(memory.DefaultAllocator)
+		defer mem.AssertSize(t, 0)
+
+		values := []decimal256.Num{
+			decimal256.FromI64(12),
+			decimal256.FromI64(12),
+			decimal256.FromI64(11),
+			decimal256.FromI64(12)}
+		expected := []decimal256.Num{
+			decimal256.FromI64(12),
+			decimal256.FromI64(0),
+			decimal256.FromI64(11)}
+
+		checkUniqueFW(t, mem, &arrow.Decimal256Type{Precision: 2, Scale: 0},
+			values, expected, []bool{true, false, true, true}, []bool{true, false, true})
+	})
+}
+
+func TestUniqueIntervalMonth(t *testing.T) {
+	mem := memory.NewCheckedAllocator(memory.DefaultAllocator)
+	defer mem.AssertSize(t, 0)
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.MonthInterval,
+		[]arrow.MonthInterval{2, 1, 2, 1}, []arrow.MonthInterval{2, 0, 1},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.DayTimeInterval,
+		[]arrow.DayTimeInterval{
+			{Days: 2, Milliseconds: 1}, {Days: 3, Milliseconds: 2},
+			{Days: 2, Milliseconds: 1}, {Days: 1, Milliseconds: 2}},
+		[]arrow.DayTimeInterval{{Days: 2, Milliseconds: 1},
+			{Days: 1, Milliseconds: 1}, {Days: 1, Milliseconds: 2}},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.MonthDayNanoInterval,
+		[]arrow.MonthDayNanoInterval{
+			{Months: 2, Days: 1, Nanoseconds: 1},
+			{Months: 3, Days: 2, Nanoseconds: 1},
+			{Months: 2, Days: 1, Nanoseconds: 1},
+			{Months: 1, Days: 2, Nanoseconds: 1}},
+		[]arrow.MonthDayNanoInterval{
+			{Months: 2, Days: 1, Nanoseconds: 1},
+			{Months: 1, Days: 1, Nanoseconds: 1},
+			{Months: 1, Days: 2, Nanoseconds: 1}},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+}
+
+func TestUniqueChunkedArrayInvoke(t *testing.T) {
+	mem := memory.NewCheckedAllocator(memory.DefaultAllocator)
+	defer mem.AssertSize(t, 0)
+
+	var (
+		values1    = []string{"foo", "bar", "foo"}
+		values2    = []string{"bar", "baz", "quuux", "foo"}
+		dictValues = []string{"foo", "bar", "baz", "quuux"}
+		typ        = arrow.BinaryTypes.String
+		a1         = makeArray(mem, typ, values1, nil)
+		a2         = makeArray(mem, typ, values2, nil)
+		exDict     = makeArray(mem, typ, dictValues, nil)
+	)
+
+	defer a1.Release()
+	defer a2.Release()
+	defer exDict.Release()
+
+	carr := arrow.NewChunked(typ, []arrow.Array{a1, a2})
+	defer carr.Release()
+
+	result, err := compute.Unique(context.TODO(), &compute.ChunkedDatum{Value: carr})
+	require.NoError(t, err)
+	defer result.Release()
+
+	require.Equal(t, compute.KindArray, result.Kind())
+	out := result.(*compute.ArrayDatum).MakeArray()
+	defer out.Release()
+
+	assertArraysEqual(t, exDict, out)
+
+	// // dict-encode
+	// var (
+	// 	dictType = &arrow.DictionaryType{
+	// 		IndexType: arrow.PrimitiveTypes.Int32, ValueType: typ}
+	// 	i1 = makeArray(mem, arrow.PrimitiveTypes.Int32, []int32{0, 1, 0}, nil)
+	// 	i2 = makeArray(mem, arrow.PrimitiveTypes.Int32, []int32{1, 2, 3, 0}, nil)
+	// )
+
+	// defer i1.Release()
+	// defer i2.Release()
+
+	// dictArrays := []arrow.Array{
+	// 	array.NewDictionaryArray(dictType, i1, exDict),
+	// 	array.NewDictionaryArray(dictType, i2, exDict),
+	// }
+	// dictCarr := arrow.NewChunked(dictType, dictArrays)
+
+	// defer dictArrays[0].Release()
+	// defer dictArrays[1].Release()
+	// defer dictCarr.Release()

Review Comment:
   Commented code?



[GitHub] [arrow] zeroshade commented on a diff in pull request #34172: GH-34171: [Go][Compute] Implement "Unique" kernel

Posted by "zeroshade (via GitHub)" <gi...@apache.org>.
zeroshade commented on code in PR #34172:
URL: https://github.com/apache/arrow/pull/34172#discussion_r1106078763


##########
go/arrow/compute/vector_hash_test.go:
##########
@@ -0,0 +1,540 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build go1.18
+
+package compute_test
+
+import (
+	"context"
+	"strings"
+	"testing"
+
+	"github.com/apache/arrow/go/v12/arrow"
+	"github.com/apache/arrow/go/v12/arrow/array"
+	"github.com/apache/arrow/go/v12/arrow/compute"
+	"github.com/apache/arrow/go/v12/arrow/compute/internal/exec"
+	"github.com/apache/arrow/go/v12/arrow/decimal128"
+	"github.com/apache/arrow/go/v12/arrow/decimal256"
+	"github.com/apache/arrow/go/v12/arrow/memory"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+	"github.com/stretchr/testify/suite"
+	"golang.org/x/exp/constraints"
+)
+
+func checkUniqueDict[I exec.IntTypes | exec.UintTypes](t *testing.T, input compute.ArrayLikeDatum, expected arrow.Array) {
+	out, err := compute.Unique(context.TODO(), input)
+	require.NoError(t, err)
+	defer out.Release()
+
+	result := out.(*compute.ArrayDatum).MakeArray().(*array.Dictionary)
+	defer result.Release()
+
+	require.Truef(t, arrow.TypeEqual(result.DataType(), expected.DataType()),
+		"wanted: %s\ngot: %s", expected.DataType(), result.DataType())
+
+	exDict := expected.(*array.Dictionary).Dictionary()
+	resultDict := result.Dictionary()
+
+	require.Truef(t, array.Equal(exDict, resultDict), "wanted: %s\ngot: %s", exDict, resultDict)
+
+	want := exec.GetValues[I](expected.(*array.Dictionary).Indices().Data(), 1)
+	got := exec.GetValues[I](result.Indices().Data(), 1)
+	assert.ElementsMatchf(t, got, want, "wanted: %s\ngot: %s", want, got)
+}
+
+func checkDictionaryUnique(t *testing.T, input compute.ArrayLikeDatum, expected arrow.Array) {
+	require.Truef(t, arrow.TypeEqual(input.Type(), expected.DataType()),
+		"wanted: %s\ngot: %s", expected.DataType(), input.Type())
+
+	switch input.Type().(*arrow.DictionaryType).IndexType.ID() {
+	case arrow.INT8:
+		checkUniqueDict[int8](t, input, expected)
+	case arrow.INT16:
+		checkUniqueDict[int16](t, input, expected)
+	case arrow.INT32:
+		checkUniqueDict[int32](t, input, expected)
+	case arrow.INT64:
+		checkUniqueDict[int64](t, input, expected)
+	case arrow.UINT8:
+		checkUniqueDict[uint8](t, input, expected)
+	case arrow.UINT16:
+		checkUniqueDict[uint16](t, input, expected)
+	case arrow.UINT32:
+		checkUniqueDict[uint32](t, input, expected)
+	case arrow.UINT64:
+		checkUniqueDict[uint64](t, input, expected)
+	}
+}
+
+func checkUniqueFixedWidth[T exec.FixedWidthTypes](t *testing.T, input, expected arrow.Array) {
+	result, err := compute.UniqueArray(context.TODO(), input)
+	require.NoError(t, err)
+	defer result.Release()
+
+	require.Truef(t, arrow.TypeEqual(result.DataType(), expected.DataType()),
+		"wanted: %s\ngot: %s", expected.DataType(), result.DataType())
+	want := exec.GetValues[T](expected.Data(), 1)
+	got := exec.GetValues[T](result.Data(), 1)
+
+	assert.ElementsMatchf(t, got, want, "wanted: %s\ngot: %s", want, got)
+}
+
+func checkUniqueVariableWidth[OffsetType int32 | int64](t *testing.T, input, expected arrow.Array) {
+	result, err := compute.UniqueArray(context.TODO(), input)
+	require.NoError(t, err)
+	defer result.Release()
+
+	require.Truef(t, arrow.TypeEqual(result.DataType(), expected.DataType()),
+		"wanted: %s\ngot: %s", expected.DataType(), result.DataType())
+
+	require.EqualValues(t, expected.Len(), result.Len())
+
+	createSlice := func(v arrow.Array) [][]byte {
+		var (
+			offsets = exec.GetOffsets[OffsetType](v.Data(), 1)
+			data    = v.Data().Buffers()[2].Bytes()
+			out     = make([][]byte, v.Len())
+		)
+
+		for i := 0; i < v.Len(); i++ {
+			out[i] = data[offsets[i]:offsets[i+1]]
+		}
+		return out
+	}
+
+	want := createSlice(expected)
+	got := createSlice(result)
+
+	assert.ElementsMatch(t, want, got)
+}
+
+type ArrowType interface {
+	exec.FixedWidthTypes | string | []byte
+}
+
+type builder[T ArrowType] interface {
+	AppendValues([]T, []bool)
+}
+
+func makeArray[T ArrowType](mem memory.Allocator, dt arrow.DataType, values []T, isValid []bool) arrow.Array {
+	bldr := array.NewBuilder(mem, dt)
+	defer bldr.Release()
+
+	bldr.(builder[T]).AppendValues(values, isValid)
+	return bldr.NewArray()
+}
+
+func checkUniqueFixedSizeBinary(t *testing.T, mem memory.Allocator, dt *arrow.FixedSizeBinaryType, inValues, outValues [][]byte, inValid, outValid []bool) {
+	input := makeArray(mem, dt, inValues, inValid)
+	defer input.Release()
+	expected := makeArray(mem, dt, outValues, outValid)
+	defer expected.Release()
+
+	result, err := compute.UniqueArray(context.TODO(), input)
+	require.NoError(t, err)
+	defer result.Release()
+
+	require.Truef(t, arrow.TypeEqual(result.DataType(), expected.DataType()),
+		"wanted: %s\ngot: %s", expected.DataType(), result.DataType())
+
+	slice := func(v arrow.Array) [][]byte {
+		data := v.Data().Buffers()[1].Bytes()
+		out := make([][]byte, v.Len())
+		for i := range out {
+			out[i] = data[i*dt.ByteWidth : (i+1)*dt.ByteWidth]
+		}
+		return out
+	}
+
+	want := slice(expected)
+	got := slice(result)
+	assert.ElementsMatch(t, want, got)
+}
+
+func checkUniqueFW[T exec.FixedWidthTypes](t *testing.T, mem memory.Allocator, dt arrow.DataType, inValues, outValues []T, inValid, outValid []bool) {
+	input := makeArray(mem, dt, inValues, inValid)
+	defer input.Release()
+	expected := makeArray(mem, dt, outValues, outValid)
+	defer expected.Release()
+
+	checkUniqueFixedWidth[T](t, input, expected)
+}
+
+func checkUniqueVW[T string | []byte](t *testing.T, mem memory.Allocator, dt arrow.DataType, inValues, outValues []T, inValid, outValid []bool) {
+	input := makeArray(mem, dt, inValues, inValid)
+	defer input.Release()
+	expected := makeArray(mem, dt, outValues, outValid)
+	defer expected.Release()
+
+	switch dt.(arrow.BinaryDataType).Layout().Buffers[1].ByteWidth {
+	case 4:
+		checkUniqueVariableWidth[int32](t, input, expected)
+	case 8:
+		checkUniqueVariableWidth[int64](t, input, expected)
+	}
+}
+
+type PrimitiveHashKernelSuite[T exec.IntTypes | exec.UintTypes | constraints.Float] struct {
+	suite.Suite
+
+	mem *memory.CheckedAllocator
+	dt  arrow.DataType
+}
+
+func (ps *PrimitiveHashKernelSuite[T]) SetupSuite() {
+	ps.dt = exec.GetDataType[T]()
+}
+
+func (ps *PrimitiveHashKernelSuite[T]) SetupTest() {
+	ps.mem = memory.NewCheckedAllocator(memory.DefaultAllocator)
+}
+
+func (ps *PrimitiveHashKernelSuite[T]) TearDownTest() {
+	ps.mem.AssertSize(ps.T(), 0)
+}
+
+func (ps *PrimitiveHashKernelSuite[T]) TestUnique() {
+	ps.Run(ps.dt.String(), func() {
+		if ps.dt.ID() == arrow.DATE64 {
+			checkUniqueFW(ps.T(), ps.mem, ps.dt,
+				[]arrow.Date64{172800000, 864000000, 172800000, 864000000},
+				[]arrow.Date64{172800000, 0, 864000000},
+				[]bool{true, false, true, true}, []bool{true, false, true})
+
+			checkUniqueFW(ps.T(), ps.mem, ps.dt,
+				[]arrow.Date64{172800000, 864000000, 259200000, 864000000},
+				[]arrow.Date64{0, 259200000, 864000000},
+				[]bool{false, false, true, true}, []bool{false, true, true})
+
+			arr, _, err := array.FromJSON(ps.mem, ps.dt, strings.NewReader(`[86400000, 172800000, null, 259200000, 172800000, null]`))
+			ps.Require().NoError(err)
+			defer arr.Release()
+			input := array.NewSlice(arr, 1, 5)
+			defer input.Release()
+			expected, _, err := array.FromJSON(ps.mem, ps.dt, strings.NewReader(`[172800000, null, 259200000]`))
+			ps.Require().NoError(err)
+			defer expected.Release()
+			checkUniqueFixedWidth[arrow.Date64](ps.T(), input, expected)
+			return
+		}
+
+		checkUniqueFW(ps.T(), ps.mem, ps.dt,
+			[]T{2, 1, 2, 1}, []T{2, 0, 1},
+			[]bool{true, false, true, true}, []bool{true, false, true})
+		checkUniqueFW(ps.T(), ps.mem, ps.dt,
+			[]T{2, 1, 3, 1}, []T{0, 3, 1},
+			[]bool{false, false, true, true}, []bool{false, true, true})
+
+		arr, _, err := array.FromJSON(ps.mem, ps.dt, strings.NewReader(`[1, 2, null, 3, 2, null]`))
+		ps.Require().NoError(err)
+		defer arr.Release()
+		input := array.NewSlice(arr, 1, 5)
+		defer input.Release()
+
+		expected, _, err := array.FromJSON(ps.mem, ps.dt, strings.NewReader(`[2, null, 3]`))
+		ps.Require().NoError(err)
+		defer expected.Release()
+
+		checkUniqueFixedWidth[T](ps.T(), input, expected)
+	})
+}
+
+type BinaryTypeHashKernelSuite[T string | []byte] struct {
+	suite.Suite
+
+	mem *memory.CheckedAllocator
+	dt  arrow.DataType
+}
+
+func (ps *BinaryTypeHashKernelSuite[T]) SetupTest() {
+	ps.mem = memory.NewCheckedAllocator(memory.DefaultAllocator)
+}
+
+func (ps *BinaryTypeHashKernelSuite[T]) TearDownTest() {
+	ps.mem.AssertSize(ps.T(), 0)
+}
+
+func (ps *BinaryTypeHashKernelSuite[T]) TestUnique() {
+	ps.Run(ps.dt.String(), func() {
+		checkUniqueVW(ps.T(), ps.mem, ps.dt,
+			[]T{T("test"), T(""), T("test2"), T("test")}, []T{T("test"), T(""), T("test2")},
+			[]bool{true, false, true, true}, []bool{true, false, true})
+	})
+}
+
+func TestHashKernels(t *testing.T) {
+	suite.Run(t, &PrimitiveHashKernelSuite[int8]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[uint8]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[int16]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[uint16]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[int32]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[uint32]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[int64]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[uint64]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[float32]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[float64]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[arrow.Date32]{})
+	suite.Run(t, &PrimitiveHashKernelSuite[arrow.Date64]{})
+
+	suite.Run(t, &BinaryTypeHashKernelSuite[string]{dt: arrow.BinaryTypes.String})
+	suite.Run(t, &BinaryTypeHashKernelSuite[string]{dt: arrow.BinaryTypes.LargeString})
+	suite.Run(t, &BinaryTypeHashKernelSuite[[]byte]{dt: arrow.BinaryTypes.Binary})
+	suite.Run(t, &BinaryTypeHashKernelSuite[[]byte]{dt: arrow.BinaryTypes.LargeBinary})
+}
+
+func TestUniqueTimeTimestamp(t *testing.T) {
+	mem := memory.NewCheckedAllocator(memory.DefaultAllocator)
+	defer mem.AssertSize(t, 0)
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.Time32s,
+		[]arrow.Time32{2, 1, 2, 1}, []arrow.Time32{2, 0, 1},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.Time64ns,
+		[]arrow.Time64{2, 1, 2, 1}, []arrow.Time64{2, 0, 1},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.Timestamp_ns,
+		[]arrow.Timestamp{2, 1, 2, 1}, []arrow.Timestamp{2, 0, 1},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.Duration_ns,
+		[]arrow.Duration{2, 1, 2, 1}, []arrow.Duration{2, 0, 1},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+}
+
+func TestUniqueFixedSizeBinary(t *testing.T) {
+	mem := memory.NewCheckedAllocator(memory.DefaultAllocator)
+	defer mem.AssertSize(t, 0)
+
+	dt := &arrow.FixedSizeBinaryType{ByteWidth: 3}
+	checkUniqueFixedSizeBinary(t, mem, dt,
+		[][]byte{[]byte("aaa"), nil, []byte("bbb"), []byte("aaa")},
+		[][]byte{[]byte("aaa"), nil, []byte("bbb")},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+}
+
+func TestUniqueDecimal(t *testing.T) {
+	t.Run("decimal128", func(t *testing.T) {
+		mem := memory.NewCheckedAllocator(memory.DefaultAllocator)
+		defer mem.AssertSize(t, 0)
+
+		values := []decimal128.Num{
+			decimal128.FromI64(12),
+			decimal128.FromI64(12),
+			decimal128.FromI64(11),
+			decimal128.FromI64(12)}
+		expected := []decimal128.Num{
+			decimal128.FromI64(12),
+			decimal128.FromI64(0),
+			decimal128.FromI64(11)}
+
+		checkUniqueFW(t, mem, &arrow.Decimal128Type{Precision: 2, Scale: 0},
+			values, expected, []bool{true, false, true, true}, []bool{true, false, true})
+	})
+
+	t.Run("decimal256", func(t *testing.T) {
+		mem := memory.NewCheckedAllocator(memory.DefaultAllocator)
+		defer mem.AssertSize(t, 0)
+
+		values := []decimal256.Num{
+			decimal256.FromI64(12),
+			decimal256.FromI64(12),
+			decimal256.FromI64(11),
+			decimal256.FromI64(12)}
+		expected := []decimal256.Num{
+			decimal256.FromI64(12),
+			decimal256.FromI64(0),
+			decimal256.FromI64(11)}
+
+		checkUniqueFW(t, mem, &arrow.Decimal256Type{Precision: 2, Scale: 0},
+			values, expected, []bool{true, false, true, true}, []bool{true, false, true})
+	})
+}
+
+func TestUniqueIntervalMonth(t *testing.T) {
+	mem := memory.NewCheckedAllocator(memory.DefaultAllocator)
+	defer mem.AssertSize(t, 0)
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.MonthInterval,
+		[]arrow.MonthInterval{2, 1, 2, 1}, []arrow.MonthInterval{2, 0, 1},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.DayTimeInterval,
+		[]arrow.DayTimeInterval{
+			{Days: 2, Milliseconds: 1}, {Days: 3, Milliseconds: 2},
+			{Days: 2, Milliseconds: 1}, {Days: 1, Milliseconds: 2}},
+		[]arrow.DayTimeInterval{{Days: 2, Milliseconds: 1},
+			{Days: 1, Milliseconds: 1}, {Days: 1, Milliseconds: 2}},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+
+	checkUniqueFW(t, mem, arrow.FixedWidthTypes.MonthDayNanoInterval,
+		[]arrow.MonthDayNanoInterval{
+			{Months: 2, Days: 1, Nanoseconds: 1},
+			{Months: 3, Days: 2, Nanoseconds: 1},
+			{Months: 2, Days: 1, Nanoseconds: 1},
+			{Months: 1, Days: 2, Nanoseconds: 1}},
+		[]arrow.MonthDayNanoInterval{
+			{Months: 2, Days: 1, Nanoseconds: 1},
+			{Months: 1, Days: 1, Nanoseconds: 1},
+			{Months: 1, Days: 2, Nanoseconds: 1}},
+		[]bool{true, false, true, true}, []bool{true, false, true})
+}
+
+func TestUniqueChunkedArrayInvoke(t *testing.T) {
+	mem := memory.NewCheckedAllocator(memory.DefaultAllocator)
+	defer mem.AssertSize(t, 0)
+
+	var (
+		values1    = []string{"foo", "bar", "foo"}
+		values2    = []string{"bar", "baz", "quuux", "foo"}
+		dictValues = []string{"foo", "bar", "baz", "quuux"}
+		typ        = arrow.BinaryTypes.String
+		a1         = makeArray(mem, typ, values1, nil)
+		a2         = makeArray(mem, typ, values2, nil)
+		exDict     = makeArray(mem, typ, dictValues, nil)
+	)
+
+	defer a1.Release()
+	defer a2.Release()
+	defer exDict.Release()
+
+	carr := arrow.NewChunked(typ, []arrow.Array{a1, a2})
+	defer carr.Release()
+
+	result, err := compute.Unique(context.TODO(), &compute.ChunkedDatum{Value: carr})
+	require.NoError(t, err)
+	defer result.Release()
+
+	require.Equal(t, compute.KindArray, result.Kind())
+	out := result.(*compute.ArrayDatum).MakeArray()
+	defer out.Release()
+
+	assertArraysEqual(t, exDict, out)
+
+	// // dict-encode
+	// var (
+	// 	dictType = &arrow.DictionaryType{
+	// 		IndexType: arrow.PrimitiveTypes.Int32, ValueType: typ}
+	// 	i1 = makeArray(mem, arrow.PrimitiveTypes.Int32, []int32{0, 1, 0}, nil)
+	// 	i2 = makeArray(mem, arrow.PrimitiveTypes.Int32, []int32{1, 2, 3, 0}, nil)
+	// )
+
+	// defer i1.Release()
+	// defer i2.Release()
+
+	// dictArrays := []arrow.Array{
+	// 	array.NewDictionaryArray(dictType, i1, exDict),
+	// 	array.NewDictionaryArray(dictType, i2, exDict),
+	// }
+	// dictCarr := arrow.NewChunked(dictType, dictArrays)
+
+	// defer dictArrays[0].Release()
+	// defer dictArrays[1].Release()
+	// defer dictCarr.Release()

Review Comment:
   gah, this was for when I get around to implementing dict-encode / dict-decode kernels. Oops, I'll remove it.



[GitHub] [arrow] ursabot commented on pull request #34172: GH-34171: [Go][Compute] Implement "Unique" kernel

Posted by "ursabot (via GitHub)" <gi...@apache.org>.
ursabot commented on PR #34172:
URL: https://github.com/apache/arrow/pull/34172#issuecomment-1430625946

   ['Python', 'R'] benchmarks have a high level of regressions.
   [test-mac-arm](https://conbench.ursa.dev/compare/runs/4e4082058e0b4b0187025c7cc0f51b31...11054437904f4b7295eaeaf6dffbde0a/)
   

