Posted to commits@arrow.apache.org by ko...@apache.org on 2023/12/10 12:19:51 UTC
(arrow) branch main updated: GH-38977: [C++] Fix spelling (#38978)
This is an automated email from the ASF dual-hosted git repository.
kou pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/arrow.git
The following commit(s) were added to refs/heads/main by this push:
new 539f31c83c GH-38977: [C++] Fix spelling (#38978)
539f31c83c is described below
commit 539f31c83ca0b357a0ec64099c8d7d1493b58d15
Author: Josh Soref <21...@users.noreply.github.com>
AuthorDate: Sun Dec 10 07:19:44 2023 -0500
GH-38977: [C++] Fix spelling (#38978)
### Rationale for this change
### What changes are included in this PR?
### Are these changes tested?
### Are there any user-facing changes?
* Closes: #38977
Authored-by: Josh Soref <21...@users.noreply.github.com>
Signed-off-by: Sutou Kouhei <ko...@clear-code.com>
---
cpp/CHANGELOG_PARQUET.md | 20 ++++++++++----------
cpp/CMakeLists.txt | 2 +-
cpp/CMakePresets.json | 2 +-
cpp/apidoc/Doxyfile | 6 +++---
cpp/build-support/cpplint.py | 16 ++++++++--------
cpp/build-support/iwyu/mappings/boost-all.imp | 2 +-
cpp/cmake_modules/SetupCxxFlags.cmake | 8 ++++----
cpp/cmake_modules/ThirdpartyToolchain.cmake | 8 ++++----
cpp/cmake_modules/Usevcpkg.cmake | 2 +-
.../arrow/execution_plan_documentation_examples.cc | 2 +-
cpp/examples/arrow/rapidjson_row_converter.cc | 4 ++--
11 files changed, 36 insertions(+), 36 deletions(-)
diff --git a/cpp/CHANGELOG_PARQUET.md b/cpp/CHANGELOG_PARQUET.md
index 06a09c20f0..68aa8386b5 100644
--- a/cpp/CHANGELOG_PARQUET.md
+++ b/cpp/CHANGELOG_PARQUET.md
@@ -4,7 +4,7 @@ Parquet C++ 1.5.0
* [PARQUET-979] - [C++] Limit size of min, max or disable stats for long binary types
* [PARQUET-1071] - [C++] parquet::arrow::FileWriter::Close is not idempotent
* [PARQUET-1349] - [C++] PARQUET_RPATH_ORIGIN is not picked by the build
- * [PARQUET-1334] - [C++] memory_map parameter seems missleading in parquet file opener
+ * [PARQUET-1334] - [C++] memory_map parameter seems misleading in parquet file opener
* [PARQUET-1333] - [C++] Reading of files with dictionary size 0 fails on Windows with bad_alloc
* [PARQUET-1283] - [C++] FormatStatValue appends trailing space to string and int96
* [PARQUET-1270] - [C++] Executable tools do not get installed
@@ -13,7 +13,7 @@ Parquet C++ 1.5.0
* [PARQUET-1255] - [C++] Exceptions thrown in some tests
* [PARQUET-1358] - [C++] index_page_offset should be unset as it is not supported.
* [PARQUET-1357] - [C++] FormatStatValue truncates binary statistics on zero character
- * [PARQUET-1319] - [C++] Pass BISON_EXECUTABLE to Thrift EP for MacOS
+ * [PARQUET-1319] - [C++] Pass BISON_EXECUTABLE to Thrift EP for macOS
* [PARQUET-1313] - [C++] Compilation failure with VS2017
* [PARQUET-1315] - [C++] ColumnChunkMetaData.has_dictionary_page() should return bool, not int64_t
* [PARQUET-1307] - [C++] memory-test fails with latest Arrow
@@ -28,7 +28,7 @@ Parquet C++ 1.5.0
* [PARQUET-1346] - [C++] Protect against null values data in empty Arrow array
* [PARQUET-1340] - [C++] Fix Travis Ci valgrind errors related to std::random_device
* [PARQUET-1323] - [C++] Fix compiler warnings with clang-6.0
- * [PARQUET-1279] - Use ASSERT_NO_FATAIL_FAILURE in C++ unit tests
+ * [PARQUET-1279] - Use ASSERT_NO_FATAL_FAILURE in C++ unit tests
* [PARQUET-1262] - [C++] Use the same BOOST_ROOT and Boost_NAMESPACE for Thrift
* [PARQUET-1267] - replace "unsafe" std::equal by std::memcmp
* [PARQUET-1360] - [C++] Minor API + style changes follow up to PARQUET-1348
@@ -89,7 +89,7 @@ Parquet C++ 1.4.0
## New Feature
* [PARQUET-1095] - [C++] Read and write Arrow decimal values
- * [PARQUET-970] - Add Add Lz4 and Zstd compression codecs
+ * [PARQUET-970] - Add Lz4 and Zstd compression codecs
## Task
* [PARQUET-1221] - [C++] Extend release README
@@ -233,10 +233,10 @@ Parquet C++ 1.1.0
* [PARQUET-977] - Improve MSVC build
* [PARQUET-957] - [C++] Add optional $PARQUET_BUILD_TOOLCHAIN environment variable option for configuring build environment
* [PARQUET-961] - [C++] Strip debug symbols from libparquet libraries in release builds by default
- * [PARQUET-954] - C++: Use Brolti 0.6 release
+ * [PARQUET-954] - C++: Use Brotli 0.6 release
* [PARQUET-953] - [C++] Change arrow::FileWriter API to be initialized from a Schema, and provide for writing multiple tables
* [PARQUET-941] - [C++] Stop needless Boost static library detection for CentOS 7 support
- * [PARQUET-942] - [C++] Fix wrong variabe use in FindSnappy
+ * [PARQUET-942] - [C++] Fix wrong variable use in FindSnappy
* [PARQUET-939] - [C++] Support Thrift_HOME CMake variable like FindSnappy does as Snappy_HOME
* [PARQUET-940] - [C++] Fix Arrow library path detection
* [PARQUET-937] - [C++] Support CMake < 3.4 again for Arrow detection
@@ -278,7 +278,7 @@ Parquet C++ 1.0.0
* [PARQUET-614] - C++: Remove unneeded LZ4-related code
* [PARQUET-604] - Install writer.h headers
* [PARQUET-621] - C++: Uninitialised DecimalMetadata is read
- * [PARQUET-620] - C++: Duplicate calls to ParquetFileWriter::Close cause duplicate metdata writes
+ * [PARQUET-620] - C++: Duplicate calls to ParquetFileWriter::Close cause duplicate metadata writes
* [PARQUET-599] - ColumnWriter::RleEncodeLevels' size estimation might be wrong
* [PARQUET-617] - C++: Enable conda build to work on systems with non-default C++ toolchains
* [PARQUET-627] - Ensure that thrift headers are generated before source compilation
@@ -339,7 +339,7 @@ Parquet C++ 1.0.0
* [PARQUET-626] - Fix builds due to unavailable llvm.org apt mirror
* [PARQUET-629] - RowGroupSerializer should only close itself once
* [PARQUET-472] - Clean up InputStream ownership semantics in ColumnReader
- * [PARQUET-739] - Rle-decoding uses static buffer that is shared accross threads
+ * [PARQUET-739] - Rle-decoding uses static buffer that is shared across threads
* [PARQUET-561] - ParquetFileReader::Contents PIMPL missing a virtual destructor
* [PARQUET-892] - [C++] Clean up link library targets in CMake files
* [PARQUET-454] - Address inconsistencies in boolean decoding
@@ -401,12 +401,12 @@ Parquet C++ 1.0.0
* [PARQUET-653] - [C++] Re-enable -static-libstdc++ in dev artifact builds
* [PARQUET-763] - C++: Expose ParquetFileReader through Arrow reader
* [PARQUET-857] - [C++] Flatten parquet/encodings directory
- * [PARQUET-862] - Provide defaut cache size values if CPU info probing is not available
+ * [PARQUET-862] - Provide default cache size values if CPU info probing is not available
* [PARQUET-689] - C++: Compress DataPages eagerly
* [PARQUET-874] - [C++] Use default memory allocator from Arrow
* [PARQUET-267] - Detach thirdparty code from build configuration.
* [PARQUET-418] - Add a utility to print contents of a Parquet file to stdout
- * [PARQUET-519] - Disable compiler warning supressions and fix all DEBUG build warnings
+ * [PARQUET-519] - Disable compiler warning suppressions and fix all DEBUG build warnings
* [PARQUET-447] - Add Debug and Release build types and associated compiler flags
* [PARQUET-868] - C++: Build snappy with optimizations
* [PARQUET-894] - Fix compilation warning
diff --git a/cpp/CMakeLists.txt b/cpp/CMakeLists.txt
index 9f17350b25..d26e06a146 100644
--- a/cpp/CMakeLists.txt
+++ b/cpp/CMakeLists.txt
@@ -75,7 +75,7 @@ set(ARROW_VERSION "15.0.0-SNAPSHOT")
string(REGEX MATCH "^[0-9]+\\.[0-9]+\\.[0-9]+" ARROW_BASE_VERSION "${ARROW_VERSION}")
-# if no build build type is specified, default to release builds
+# if no build type is specified, default to release builds
if(NOT DEFINED CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE
Release
diff --git a/cpp/CMakePresets.json b/cpp/CMakePresets.json
index a15b204c39..9d99b3b2a7 100644
--- a/cpp/CMakePresets.json
+++ b/cpp/CMakePresets.json
@@ -428,7 +428,7 @@
"base-benchmarks",
"features-maximal"
],
- "displayName": "Benchmarking build with with everything enabled",
+ "displayName": "Benchmarking build with everything enabled",
"cacheVariables": {}
},
{
diff --git a/cpp/apidoc/Doxyfile b/cpp/apidoc/Doxyfile
index baa3b41e69..e19c933cd4 100644
--- a/cpp/apidoc/Doxyfile
+++ b/cpp/apidoc/Doxyfile
@@ -239,7 +239,7 @@ QT_AUTOBRIEF = NO
# tag to YES if you prefer the old behavior instead.
#
# Note that setting this tag to YES also means that rational rose comments are
-# not recognized any more.
+# not recognized anymore.
# The default value is: NO.
MULTILINE_CPP_IS_BRIEF = NO
@@ -569,7 +569,7 @@ INTERNAL_DOCS = NO
# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
# names in lower-case letters. If set to YES, upper-case letters are also
# allowed. This is useful if you have classes or files whose names only differ
-# in case and if your file system supports case sensitive file names. Windows
+# in case and if your file system supports case-sensitive file names. Windows
# (including Cygwin) ands Mac users are advised to set this option to NO.
# The default value is: system dependent.
@@ -734,7 +734,7 @@ SHOW_NAMESPACES = YES
# The FILE_VERSION_FILTER tag can be used to specify a program or script that
# doxygen should invoke to get the current version for each file (typically from
# the version control system). Doxygen will invoke the program by executing (via
-# popen()) the command command input-file, where command is the value of the
+# popen()) the command input-file, where command is the value of the
# FILE_VERSION_FILTER tag, and input-file is the name of an input file provided
# by doxygen. Whatever the program writes to standard output is used as the file
# version. For an example see the documentation.
diff --git a/cpp/build-support/cpplint.py b/cpp/build-support/cpplint.py
index cf1859bb6d..1bceed9a67 100755
--- a/cpp/build-support/cpplint.py
+++ b/cpp/build-support/cpplint.py
@@ -873,7 +873,7 @@ _repository = None
# Files to exclude from linting. This is set by the --exclude flag.
_excludes = None
-# Whether to supress all PrintInfo messages, UNRELATED to --quiet flag
+# Whether to suppress all PrintInfo messages, UNRELATED to --quiet flag
_quiet = False
# The allowed line length of files.
@@ -1001,7 +1001,7 @@ def ParseNolintSuppressions(filename, raw_line, linenum, error):
'Unknown NOLINT error category: %s' % category)
-def ProcessGlobalSuppresions(lines):
+def ProcessGlobalSuppressions(lines):
"""Updates the list of global error suppressions.
Parses any lint directives in the file that have global effect.
@@ -1029,7 +1029,7 @@ def IsErrorSuppressedByNolint(category, linenum):
"""Returns true if the specified error category is suppressed on this line.
Consults the global error_suppressions map populated by
- ParseNolintSuppressions/ProcessGlobalSuppresions/ResetNolintSuppressions.
+ ParseNolintSuppressions/ProcessGlobalSuppressions/ResetNolintSuppressions.
Args:
category: str, the category of the error.
@@ -1271,7 +1271,7 @@ class _CppLintState(object):
self._filters_backup = self.filters[:]
self.counting = 'total' # In what way are we counting errors?
self.errors_by_category = {} # string to int dict storing error counts
- self.quiet = False # Suppress non-error messagess?
+ self.quiet = False # Suppress non-error messages?
# output format:
# "emacs" - format that emacs can parse (default)
@@ -1599,7 +1599,7 @@ class FileInfo(object):
repo = FileInfo(_repository).FullName()
root_dir = project_dir
while os.path.exists(root_dir):
- # allow case insensitive compare on Windows
+ # allow case-insensitive compare on Windows
if os.path.normcase(root_dir) == os.path.normcase(repo):
return os.path.relpath(fullname, root_dir).replace('\\', '/')
one_up_dir = os.path.dirname(root_dir)
@@ -1765,7 +1765,7 @@ _RE_PATTERN_CLEANSE_LINE_C_COMMENTS = re.compile(
def IsCppString(line):
"""Does line terminate so, that the next symbol is in string constant.
- This function does not consider single-line nor multi-line comments.
+ This function does not consider comments at all.
Args:
line: is a partial line of code starting from the 0..n.
@@ -3870,7 +3870,7 @@ def CheckOperatorSpacing(filename, clean_lines, linenum, error):
elif not Match(r'#.*include', line):
# Look for < that is not surrounded by spaces. This is only
# triggered if both sides are missing spaces, even though
- # technically should should flag if at least one side is missing a
+ # technically it should flag if at least one side is missing a
# space. This is done to avoid some false positives with shifts.
match = Match(r'^(.*[^\s<])<[^\s=<,]', line)
if match:
@@ -6495,7 +6495,7 @@ def ProcessFileData(filename, file_extension, lines, error,
ResetNolintSuppressions()
CheckForCopyright(filename, lines, error)
- ProcessGlobalSuppresions(lines)
+ ProcessGlobalSuppressions(lines)
RemoveMultiLineComments(filename, lines, error)
clean_lines = CleansedLines(lines)
diff --git a/cpp/build-support/iwyu/mappings/boost-all.imp b/cpp/build-support/iwyu/mappings/boost-all.imp
index 5427ae2ac5..7c48acaf34 100644
--- a/cpp/build-support/iwyu/mappings/boost-all.imp
+++ b/cpp/build-support/iwyu/mappings/boost-all.imp
@@ -57,7 +57,7 @@
{ include: ["@<boost/function/.*>", private, "<boost/function.hpp>", public ] },
#manually delete $ sed '/workarounds*\.hpp/d' -i boost-all.imp
#also good idea to remove all lines referring to folders above (e.g., sed '/\/format\//d' -i boost-all.imp)
-#programatically include:
+#programmatically include:
{ include: ["<boost/accumulators/numeric/detail/function1.hpp>", private, "<boost/accumulators/numeric/functional.hpp>", public ] },
{ include: ["<boost/accumulators/numeric/detail/function2.hpp>", private, "<boost/accumulators/numeric/functional.hpp>", public ] },
{ include: ["<boost/accumulators/numeric/detail/pod_singleton.hpp>", private, "<boost/accumulators/numeric/functional.hpp>", public ] },
diff --git a/cpp/cmake_modules/SetupCxxFlags.cmake b/cpp/cmake_modules/SetupCxxFlags.cmake
index 8e8f687d06..6940c6befa 100644
--- a/cpp/cmake_modules/SetupCxxFlags.cmake
+++ b/cpp/cmake_modules/SetupCxxFlags.cmake
@@ -73,7 +73,7 @@ if(ARROW_CPU_FLAG STREQUAL "x86")
message(STATUS "Disable AVX512 support on MINGW for now")
else()
# Check for AVX512 support in the compiler.
- set(OLD_CMAKE_REQURED_FLAGS ${CMAKE_REQUIRED_FLAGS})
+ set(OLD_CMAKE_REQUIRED_FLAGS ${CMAKE_REQUIRED_FLAGS})
set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} ${ARROW_AVX512_FLAG}")
check_cxx_source_compiles("
#ifdef _MSC_VER
@@ -89,7 +89,7 @@ if(ARROW_CPU_FLAG STREQUAL "x86")
return 0;
}"
CXX_SUPPORTS_AVX512)
- set(CMAKE_REQUIRED_FLAGS ${OLD_CMAKE_REQURED_FLAGS})
+ set(CMAKE_REQUIRED_FLAGS ${OLD_CMAKE_REQUIRED_FLAGS})
endif()
endif()
# Runtime SIMD level it can get from compiler and ARROW_RUNTIME_SIMD_LEVEL
@@ -459,7 +459,7 @@ elseif(CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang" OR CMAKE_CXX_COMPILER_ID STRE
if(CMAKE_HOST_SYSTEM_VERSION VERSION_LESS 20)
# Avoid C++17 std::get 'not available' issue on macOS 10.13
- # This will be required until atleast R 4.4 is released and
+ # This will be required until at least R 4.4 is released and
# CRAN (hopefully) stops checking on 10.13
string(APPEND CXX_ONLY_FLAGS " -D_LIBCPP_DISABLE_AVAILABILITY")
endif()
@@ -527,7 +527,7 @@ if(ARROW_CPU_FLAG STREQUAL "aarch64")
set(CXX_COMMON_FLAGS "${CXX_COMMON_FLAGS} -msve-vector-bits=${SVE_VECTOR_BITS}")
else()
set(ARROW_HAVE_SVE_SIZELESS ON)
- add_definitions(-DARROW_HAVE_SVE_SIZELSS)
+ add_definitions(-DARROW_HAVE_SVE_SIZELESS)
endif()
endif()
set(CXX_COMMON_FLAGS "${CXX_COMMON_FLAGS} -march=${ARROW_ARMV8_MARCH}")
diff --git a/cpp/cmake_modules/ThirdpartyToolchain.cmake b/cpp/cmake_modules/ThirdpartyToolchain.cmake
index 978f031983..89d046945e 100644
--- a/cpp/cmake_modules/ThirdpartyToolchain.cmake
+++ b/cpp/cmake_modules/ThirdpartyToolchain.cmake
@@ -1328,8 +1328,8 @@ macro(build_snappy)
set(SNAPPY_CMAKE_ARGS
${EP_COMMON_CMAKE_ARGS} -DSNAPPY_BUILD_TESTS=OFF -DSNAPPY_BUILD_BENCHMARKS=OFF
"-DCMAKE_INSTALL_PREFIX=${SNAPPY_PREFIX}")
- # Snappy unconditionaly enables Werror when building with clang this can lead
- # to build failues by way of new compiler warnings. This adds a flag to disable
+ # Snappy unconditionally enables -Werror when building with clang this can lead
+ # to build failures by way of new compiler warnings. This adds a flag to disable
# Werror to the very end of the invocation to override the snappy internal setting.
if(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
foreach(CONFIG DEBUG MINSIZEREL RELEASE RELWITHDEBINFO)
@@ -4238,7 +4238,7 @@ macro(build_google_cloud_cpp_storage)
target_include_directories(google-cloud-cpp::common BEFORE
INTERFACE "${GOOGLE_CLOUD_CPP_INCLUDE_DIR}")
# Refer to https://github.com/googleapis/google-cloud-cpp/blob/main/google/cloud/google_cloud_cpp_common.cmake
- # (subsitute `main` for the SHA of the version we use)
+ # (substitute `main` for the SHA of the version we use)
# Version 1.39.0 is at a different place (they refactored after):
# https://github.com/googleapis/google-cloud-cpp/blob/29e5af8ca9b26cec62106d189b50549f4dc1c598/google/cloud/CMakeLists.txt#L146-L155
target_link_libraries(google-cloud-cpp::common
@@ -5071,7 +5071,7 @@ if(ARROW_S3)
if(APPLE)
# CoreFoundation's path is hardcoded in the CMake files provided by
- # aws-sdk-cpp to use the MacOSX SDK provided by XCode which makes
+ # aws-sdk-cpp to use the macOS SDK provided by XCode which makes
# XCode a hard dependency. Command Line Tools is often used instead
# of the full XCode suite, so let the linker to find it.
set_target_properties(AWS::aws-c-common
diff --git a/cpp/cmake_modules/Usevcpkg.cmake b/cpp/cmake_modules/Usevcpkg.cmake
index ee2cfbc670..b6192468da 100644
--- a/cpp/cmake_modules/Usevcpkg.cmake
+++ b/cpp/cmake_modules/Usevcpkg.cmake
@@ -20,7 +20,7 @@ message(STATUS "Using vcpkg to find dependencies")
# ----------------------------------------------------------------------
# Define macros
-# macro to list subdirectirectories (non-recursive)
+# macro to list subdirectories (non-recursive)
macro(list_subdirs SUBDIRS DIR)
file(GLOB children_
RELATIVE ${DIR}
diff --git a/cpp/examples/arrow/execution_plan_documentation_examples.cc b/cpp/examples/arrow/execution_plan_documentation_examples.cc
index 00a23be293..b92f5801c1 100644
--- a/cpp/examples/arrow/execution_plan_documentation_examples.cc
+++ b/cpp/examples/arrow/execution_plan_documentation_examples.cc
@@ -342,7 +342,7 @@ arrow::Status TableSourceSinkExample() {
///
/// Source-Filter-Table
/// This example shows how a filter can be used in an execution plan,
-/// to filter data from a source. The output from the exeuction plan
+/// to filter data from a source. The output from the execution plan
/// is collected into a table.
arrow::Status ScanFilterSinkExample() {
ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::dataset::Dataset> dataset, GetDataset());
diff --git a/cpp/examples/arrow/rapidjson_row_converter.cc b/cpp/examples/arrow/rapidjson_row_converter.cc
index 3907e72121..7448e9d04e 100644
--- a/cpp/examples/arrow/rapidjson_row_converter.cc
+++ b/cpp/examples/arrow/rapidjson_row_converter.cc
@@ -75,7 +75,7 @@ class RowBatchBuilder {
// Default implementation
arrow::Status Visit(const arrow::Array& array) {
return arrow::Status::NotImplemented(
- "Can not convert to json document for array of type ", array.type()->ToString());
+ "Cannot convert to json document for array of type ", array.type()->ToString());
}
// Handles booleans, integers, floats
@@ -346,7 +346,7 @@ class JsonValueConverter {
// Default implementation
arrow::Status Visit(const arrow::DataType& type) {
return arrow::Status::NotImplemented(
- "Can not convert json value to Arrow array of type ", type.ToString());
+ "Cannot convert json value to Arrow array of type ", type.ToString());
}
arrow::Status Visit(const arrow::Int64Type& type) {