Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2021/07/14 03:12:29 UTC

[GitHub] [arrow] cyb70289 commented on a change in pull request #10663: ARROW-13253: [FlightRPC][C++] Fix segfault with large messages

cyb70289 commented on a change in pull request #10663:
URL: https://github.com/apache/arrow/pull/10663#discussion_r669232404



##########
File path: cpp/src/arrow/flight/client.cc
##########
@@ -688,11 +688,12 @@ class GrpcStreamWriter : public FlightStreamWriter {
   Status WriteMetadata(std::shared_ptr<Buffer> app_metadata) override {
     FlightPayload payload{};
     payload.app_metadata = app_metadata;
-    if (!internal::WritePayload(payload, writer_->stream().get())) {
+    auto status = internal::WritePayload(payload, writer_->stream().get());
+    if (!status.ok() && status.IsIOError()) {

Review comment:
       Can we remove `!status.ok()` and only check `if (status.IsIOError())`?
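
A minimal sketch of why the extra test is redundant (illustration only, not part of the patch): `Status::IsIOError()` checks the status code, and an OK status never carries the `IOError` code, so `IsIOError()` already implies `!ok()`.

```cpp
// Illustration only: IsIOError() can never be true for an OK status,
// so `if (status.IsIOError())` behaves the same as
// `if (!status.ok() && status.IsIOError())`.
#include <iostream>

#include "arrow/status.h"

int main() {
  arrow::Status ok = arrow::Status::OK();
  arrow::Status io = arrow::Status::IOError("stream closed");

  std::cout << ok.IsIOError() << std::endl;  // 0: OK is not an IO error
  std::cout << io.IsIOError() << std::endl;  // 1: and io.ok() is false here
  return 0;
}
```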

##########
File path: cpp/src/arrow/flight/serialization_internal.cc
##########
@@ -201,9 +193,7 @@ grpc::Status FlightDataSerialize(const FlightPayload& msg, ByteBuffer* out,
   // Write the descriptor if present
   int32_t descriptor_size = 0;
   if (msg.descriptor != nullptr) {
-    if (msg.descriptor->size() > kInt32Max) {
-      return ToGrpcStatus(Status::CapacityError("Descriptor size overflow (>= 2**31)"));
-    }
+    DCHECK_LT(msg.descriptor->size(), kInt32Max);

Review comment:
       `DCHECK_LE`?
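
For reference, a boundary-case sketch (illustration only, assuming `kInt32Max` is `std::numeric_limits<int32_t>::max()` as in the file): the removed runtime check only rejected sizes strictly greater than `kInt32Max`, so a descriptor of exactly `kInt32Max` bytes used to be accepted; `DCHECK_LT` would fire for that size, while `DCHECK_LE` matches the old behaviour.

```cpp
// Boundary case at size == kInt32Max, illustration only.
#include <cassert>
#include <cstdint>
#include <limits>

constexpr int64_t kInt32Max = std::numeric_limits<int32_t>::max();

int main() {
  const int64_t size = kInt32Max;  // exactly at the boundary
  assert(!(size > kInt32Max));     // old runtime check: size was accepted
  assert(size <= kInt32Max);       // DCHECK_LE condition: holds
  assert(!(size < kInt32Max));     // DCHECK_LT condition: would fire
  return 0;
}
```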

##########
File path: cpp/src/arrow/flight/serialization_internal.cc
##########
@@ -201,9 +193,7 @@ grpc::Status FlightDataSerialize(const FlightPayload& msg, ByteBuffer* out,
   // Write the descriptor if present
   int32_t descriptor_size = 0;
   if (msg.descriptor != nullptr) {
-    if (msg.descriptor->size() > kInt32Max) {
-      return ToGrpcStatus(Status::CapacityError("Descriptor size overflow (>= 2**31)"));
-    }
+    DCHECK_LT(msg.descriptor->size(), kInt32Max);

Review comment:
       It's a bit strange to continue running here if we know it will eventually fail (will it?).
   
   From the JIRA issue, it looks like gRPC should check and handle our return value properly. Maybe open an issue with the gRPC community? If gRPC is willing to change, do we still need to update our code?
   
   For now, I'm not sure whether it would be more appropriate to simply abort here in release builds, with a clear error message and a hint to fix the problem by chunking the data.
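
One possible shape for that alternative, sketched under the assumption that the surrounding `FlightDataSerialize` context stays as quoted above: keep a check that is active in release builds and fail with an actionable message, for example by returning a descriptive error the way the removed code did (the `CheckDescriptorSize` helper below is hypothetical, not existing Arrow API).

```cpp
// Sketch only, not the patch's approach: keep the runtime check in release
// builds and return a descriptive error with a hint, rather than relying on
// DCHECK_*, which compiles to a no-op outside debug builds.
#include <cstdint>
#include <limits>

#include "arrow/buffer.h"
#include "arrow/status.h"

namespace {

constexpr int64_t kInt32Max = std::numeric_limits<int32_t>::max();

// Hypothetical helper; the caller would convert the result with
// ToGrpcStatus() exactly as the existing code does.
arrow::Status CheckDescriptorSize(const arrow::Buffer& descriptor) {
  if (descriptor.size() > kInt32Max) {
    return arrow::Status::CapacityError(
        "Flight descriptor is ", descriptor.size(),
        " bytes, which exceeds the 2 GiB gRPC message limit; "
        "chunk the data into smaller messages instead");
  }
  return arrow::Status::OK();
}

}  // namespace
```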



