Posted to issues-all@impala.apache.org by "Tim Armstrong (JIRA)" <ji...@apache.org> on 2018/10/23 21:54:00 UTC
[jira] [Resolved] (IMPALA-4451) Impala crashes on thread creation failure when hitting ulimit
[ https://issues.apache.org/jira/browse/IMPALA-4451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tim Armstrong resolved IMPALA-4451.
-----------------------------------
Resolution: Duplicate
> Impala crashes on thread creation failure when hitting ulimit
> -------------------------------------------------------------
>
> Key: IMPALA-4451
> URL: https://issues.apache.org/jira/browse/IMPALA-4451
> Project: IMPALA
> Issue Type: Bug
> Components: Backend
> Affects Versions: Impala 2.5.0
> Reporter: Mostafa Mokhtar
> Priority: Critical
> Labels: supportability
> Attachments: Boost-32K-Threadlimit.patch
>
>
> Impala crashes on thread creation failure when the ulimit is reached. Ideally, Impala should handle thread creation failure gracefully: check the ulimit and report a meaningful error instead of aborting.
> Stack
> {code}
> #0 0x0000003a57832625 in raise () from /lib64/libc.so.6
> #1 0x0000003a57833e05 in abort () from /lib64/libc.so.6
> #2 0x00007ff3e13d800d in __gnu_cxx::__verbose_terminate_handler() () from /opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.234/lib/impala/lib/libstdc++.so.6
> #3 0x00007ff3e13d60e6 in ?? () from /opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.234/lib/impala/lib/libstdc++.so.6
> #4 0x00007ff3e13d6131 in std::terminate() () from /opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.234/lib/impala/lib/libstdc++.so.6
> #5 0x00007ff3e13d6348 in __cxa_throw () from /opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.234/lib/impala/lib/libstdc++.so.6
> #6 0x0000000000812acd in boost::throw_exception<boost::thread_resource_error> (e=...) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/toolchain/boost-1.57.0/include/boost/throw_exception.hpp:69
> #7 0x0000000000bc72c0 in start_thread (this=0x7fe55f2e1090, f=Unhandled dwarf expression opcode 0xf3) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/toolchain/boost-1.57.0/include/boost/thread/detail/thread.hpp:181
> #8 boost::thread::thread<void (*)(const std::basic_string<char>&, const std::basic_string<char>&, boost::function<void()>, impala::Promise<long int>*), std::basic_string<char>, std::basic_string<char>, boost::function<void()>, impala::Promise<long int>*>(void (*)(const std::basic_string<char, std::char_traits<char>, std::allocator<char> > &, const std::basic_string<char, std::char_traits<char>, std::allocator<char> > &, boost::function<void()>, impala::Promise<long> *), std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, boost::function<void()>, impala::Promise<long> *) ( this=0x7fe55f2e1090, f=Unhandled dwarf expression opcode 0xf3) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/toolchain/boost-1.57.0/include/boost/thread/detail/thread.hpp:412
> #9 0x0000000000bc3f65 in impala::Thread::StartThread (this=0x7fe9b0a8d000, functor=Unhandled dwarf expression opcode 0xf3) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/be/src/util/thread.cc:280
> #10 0x00000000009e8413 in impala::Thread::Thread<boost::_bi::bind_t<void, boost::_mfi::mf2<void, impala::ThriftThread, boost::shared_ptr<apache::thrift::concurrency::Runnable>, impala::Promise<unsigned long>*>, boost::_bi::list3<boost::_bi::value<impala::ThriftThread*>, boost::_bi::value<boost::shared_ptr<apache::thrift::concurrency::Runnable> >, boost::_bi::value<impala::Promise<unsigned long>*> > > > (this=0x7fe9b0a8d000, category=Unhandled dwarf expression opcode 0xf3) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/be/src/util/thread.h:65
> #11 0x00000000009e7382 in impala::ThriftThread::start (this=0xc246200) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/be/src/rpc/thrift-thread.cc:33
> #12 0x00000000009e9179 in apache::thrift::server::TAcceptQueueServer::SetupConnection (this=0xb021b10, client=...) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/be/src/rpc/TAcceptQueueServer.cpp:173
> #13 0x00000000009e967d in operator() (function_obj_ptr=Unhandled dwarf expression opcode 0xf3) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/be/src/rpc/TAcceptQueueServer.cpp:218
> #14 boost::detail::function::void_function_obj_invoker2<apache::thrift::server::TAcceptQueueServer::serve()::<lambda(int, const boost::shared_ptr<apache::thrift::transport::TTransport>&)>, void, int, const boost::shared_ptr<apache::thrift::transport::TTransport>&>::invoke(boost::detail::function::function_buffer &, int, const boost::shared_ptr<apache::thrift::transport::TTransport> &) (function_obj_ptr=Unhandled dwarf expression opcode 0xf3) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/toolchain/boost-1.57.0/include/boost/function/function_template.hpp:153
> #15 0x00000000009ebfc2 in operator() (this=0x7ff36e8a0740, thread_id=0) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/toolchain/boost-1.57.0/include/boost/function/function_template.hpp:767
> #16 impala::ThreadPool<boost::shared_ptr<apache::thrift::transport::TTransport> >::WorkerThread ( this=0x7ff36e8a0740, thread_id=0) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/be/src/util/thread-pool.h:125
> #17 0x0000000000bc5d29 in operator() (name=Unhandled dwarf expression opcode 0xf3) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/toolchain/boost-1.57.0/include/boost/function/function_template.hpp:767
> #18 impala::Thread::SuperviseThread (name=Unhandled dwarf expression opcode 0xf3) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/be/src/util/thread.cc:317
> #19 0x0000000000bc6704 in operator()<void (*)(const std::basic_string<char>&, const std::basic_string<char>&, boost::function<void()>, impala::Promise<long int>*), boost::_bi::list0> (this=0xc94fa00) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/toolchain/boost-1.57.0/include/boost/bind/bind.hpp:457
> #20 operator() (this=0xc94fa00) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/toolchain/boost-1.57.0/include/boost/bind/bind_template.hpp:20
> #21 boost::detail::thread_data<boost::_bi::bind_t<void, void (*)(const std::basic_string<char, std::char_traits<char>, std::allocator<char> >&, const std::basic_string<char, std::char_traits<char>, std::allocator<char> >&, boost::function<void()>, impala::Promise<long int>*), boost::_bi::list4<boost::_bi::value<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, boost::_bi::value<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, boost::_bi::value<boost::function<void()> >, boost::_bi::value<impala::Promise<long int>*> > > >::run(void) (this=0xc94fa00) at /data/jenkins/workspace/impala-private-build-binaries/repos/Impala/toolchain/boost-1.57.0/include/boost/thread/detail/thread.hpp:116
> #22 0x0000000000e0632a in thread_proxy ()
> #23 0x0000003a57c07aa1 in start_thread () from /lib64/libpthread.so.0
> #24 0x0000003a578e893d in clone () from /lib64/libc.so.6
> {code}
> This also causes running queries to fail with seemingly unrelated errors:
> {code}
> I1108 21:00:14.136420 131331 thrift-util.cc:111] TAcceptQueueServer exception: St9bad_alloc: std::bad_alloc
> I1108 21:00:14.279989 133953 thrift-util.cc:111] TAcceptQueueServer exception: St9bad_alloc: std::bad_alloc
> I1108 21:00:14.281364 134125 thrift-util.cc:111] TAcceptQueueServer exception: St9bad_alloc: std::bad_alloc
> I1108 21:00:14.282368 134180 thrift-util.cc:111] TAcceptQueueServer exception: St9bad_alloc: std::bad_alloc
> I1108 21:00:14.317803 134760 status.cc:47] Memory limit exceeded
> @ 0x83410a impala::Status::Status()
> @ 0x834268 impala::Status::MemLimitExceeded()
> @ 0xa0c23c impala::MemTracker::MemLimitExceeded()
> @ 0xc67c49 impala::BaseScalarColumnReader::ReadDataPage()
> @ 0xc68228 impala::BaseScalarColumnReader::NextPage()
> @ 0xc76198 impala::ScalarColumnReader<>::ReadNonRepeatedValueBatch()
> @ 0xc4e333 impala::HdfsParquetScanner::AssembleRows()
> @ 0xc50f28 impala::HdfsParquetScanner::GetNextInternal()
> @ 0xc4dd72 impala::HdfsParquetScanner::ProcessSplit()
> @ 0xc2a026 impala::HdfsScanNode::ProcessSplit()
> @ 0xc2c163 impala::HdfsScanNode::ScannerThread()
> @ 0xbc5d29 impala::Thread::SuperviseThread()
> @ 0xbc6704 boost::detail::thread_data<>::run()
> @ 0xe0632a thread_proxy
> @ 0x3a57c07aa1 (unknown)
> @ 0x3a578e893d (unknown)
> I1108 21:00:14.323889 134760 runtime-state.cc:208] Error from query 67449109ff600bb2:ea366be700000000: Memory Limit Exceeded by fragment: 67449109ff600bb2:2442
> HDFS_SCAN_NODE (id=10) could not allocate 64.01 KB without exceeding limit.
> Query(67449109ff600bb2:ea366be700000000): Total=4.98 GB Peak=5.02 GB
> Fragment 67449109ff600bb2:ea366be700000000: Total=16.00 KB Peak=16.00 KB
> AGGREGATION_NODE (id=76): Total=8.00 KB Peak=8.00 KB
> Exprs: Total=4.00 KB Peak=4.00 KB
> EXCHANGE_NODE (id=75): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> PLAN_ROOT_SINK: Total=0 Peak=0
> Block Manager: Limit=80.00 GB Total=4.38 GB Peak=4.38 GB
> Fragment 67449109ff600bb2:5949: Total=4.52 GB Peak=4.55 GB
> Runtime Filter Bank: Total=40.00 MB Peak=40.00 MB
> AGGREGATION_NODE (id=49): Total=8.00 KB Peak=8.00 KB
> Exprs: Total=4.00 KB Peak=4.00 KB
> HASH_JOIN_NODE (id=48): Total=244.25 KB Peak=244.25 KB
> HASH_JOIN_NODE (id=47): Total=236.25 KB Peak=236.25 KB
> HASH_JOIN_NODE (id=46): Total=228.25 KB Peak=228.25 KB
> HASH_JOIN_NODE (id=45): Total=220.25 KB Peak=220.25 KB
> HASH_JOIN_NODE (id=44): Total=212.25 KB Peak=212.25 KB
> HASH_JOIN_NODE (id=43): Total=204.25 KB Peak=204.25 KB
> HASH_JOIN_NODE (id=42): Total=196.25 KB Peak=196.25 KB
> HASH_JOIN_NODE (id=41): Total=188.25 KB Peak=188.25 KB
> HASH_JOIN_NODE (id=40): Total=180.25 KB Peak=180.25 KB
> HASH_JOIN_NODE (id=39): Total=172.25 KB Peak=172.25 KB
> HASH_JOIN_NODE (id=38): Total=164.25 KB Peak=164.25 KB
> HASH_JOIN_NODE (id=37): Total=156.25 KB Peak=156.25 KB
> HASH_JOIN_NODE (id=36): Total=148.25 KB Peak=148.25 KB
> HASH_JOIN_NODE (id=35): Total=140.25 KB Peak=140.25 KB
> HASH_JOIN_NODE (id=34): Total=137.14 MB Peak=137.16 MB
> HASH_JOIN_NODE (id=33): Total=265.13 MB Peak=265.15 MB
> HASH_JOIN_NODE (id=32): Total=521.11 MB Peak=521.12 MB
> HASH_JOIN_NODE (id=31): Total=521.11 MB Peak=521.11 MB
> HASH_JOIN_NODE (id=30): Total=521.10 MB Peak=521.11 MB
> HASH_JOIN_NODE (id=29): Total=521.09 MB Peak=521.10 MB
> HASH_JOIN_NODE (id=28): Total=521.08 MB Peak=521.09 MB
> HASH_JOIN_NODE (id=27): Total=521.07 MB Peak=521.08 MB
> HASH_JOIN_NODE (id=26): Total=521.07 MB Peak=521.07 MB
> HASH_JOIN_NODE (id=25): Total=521.07 MB Peak=521.07 MB
> EXCHANGE_NODE (id=50): Total=0 Peak=0
> DataStreamRecvr: Total=16.40 MB Peak=16.40 MB
> EXCHANGE_NODE (id=51): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=2.40 MB
> EXCHANGE_NODE (id=52): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.39 MB
> EXCHANGE_NODE (id=53): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.14 MB
> EXCHANGE_NODE (id=54): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.42 MB
> EXCHANGE_NODE (id=55): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.41 MB
> EXCHANGE_NODE (id=56): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.41 MB
> EXCHANGE_NODE (id=57): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.42 MB
> EXCHANGE_NODE (id=58): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.43 MB
> EXCHANGE_NODE (id=59): Total=0 Peak=0
> DataStreamRecvr: Total=5.20 KB Peak=16.41 MB
> EXCHANGE_NODE (id=60): Total=0 Peak=0
> DataStreamRecvr: Total=787.09 KB Peak=16.42 MB
> EXCHANGE_NODE (id=61): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=62): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=63): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=64): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=65): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=66): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=67): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=68): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=69): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=70): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=71): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=72): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=73): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=74): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> DataStreamSender (dst_id=75): Total=16.00 KB Peak=16.00 KB
> Fragment 67449109ff600bb2:e8: Total=189.75 MB Peak=189.88 MB
> HDFS_SCAN_NODE (id=0): Total=158.33 MB Peak=158.46 MB
> DataStreamSender (dst_id=50): Total=31.40 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:471: Total=6.66 MB Peak=189.88 MB
> HDFS_SCAN_NODE (id=1): Total=0 Peak=158.46 MB
> DataStreamSender (dst_id=51): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:7fa: Total=6.66 MB Peak=189.88 MB
> HDFS_SCAN_NODE (id=2): Total=0 Peak=158.46 MB
> DataStreamSender (dst_id=52): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:b83: Total=6.66 MB Peak=189.88 MB
> HDFS_SCAN_NODE (id=3): Total=0 Peak=158.46 MB
> DataStreamSender (dst_id=53): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:f0c: Total=6.66 MB Peak=189.82 MB
> HDFS_SCAN_NODE (id=4): Total=0 Peak=158.48 MB
> DataStreamSender (dst_id=54): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:1295: Total=6.66 MB Peak=189.82 MB
> HDFS_SCAN_NODE (id=5): Total=0 Peak=158.46 MB
> DataStreamSender (dst_id=55): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:161e: Total=6.66 MB Peak=188.85 MB
> HDFS_SCAN_NODE (id=6): Total=0 Peak=157.42 MB
> DataStreamSender (dst_id=56): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:19a7: Total=6.66 MB Peak=180.94 MB
> HDFS_SCAN_NODE (id=7): Total=0 Peak=149.52 MB
> DataStreamSender (dst_id=57): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:1d30: Total=6.66 MB Peak=180.82 MB
> HDFS_SCAN_NODE (id=8): Total=0 Peak=149.40 MB
> DataStreamSender (dst_id=58): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:20b9: Total=46.64 MB Peak=180.94 MB
> HDFS_SCAN_NODE (id=9): Total=19.93 MB Peak=149.52 MB
> DataStreamSender (dst_id=59): Total=26.68 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:2442: Total=179.56 MB Peak=188.88 MB
> HDFS_SCAN_NODE (id=10): Total=157.21 MB Peak=157.46 MB
> DataStreamSender (dst_id=60): Total=22.32 MB Peak=31.40 MB
> I1108 21:00:14.324087 131105 status.cc:47] Memory limit exceeded
> @ 0x83410a impala::Status::Status()
> @ 0x834268 impala::Status::MemLimitExceeded()
> @ 0xa0c23c impala::MemTracker::MemLimitExceeded()
> @ 0xc67c49 impala::BaseScalarColumnReader::ReadDataPage()
> @ 0xc68228 impala::BaseScalarColumnReader::NextPage()
> @ 0xc76198 impala::ScalarColumnReader<>::ReadNonRepeatedValueBatch()
> @ 0xc4e333 impala::HdfsParquetScanner::AssembleRows()
> @ 0xc50f28 impala::HdfsParquetScanner::GetNextInternal()
> @ 0xc4dd72 impala::HdfsParquetScanner::ProcessSplit()
> @ 0xc2a026 impala::HdfsScanNode::ProcessSplit()
> @ 0xc2c163 impala::HdfsScanNode::ScannerThread()
> @ 0xbc5d29 impala::Thread::SuperviseThread()
> @ 0xbc6704 boost::detail::thread_data<>::run()
> @ 0xe0632a thread_proxy
> @ 0x3a57c07aa1 (unknown)
> @ 0x3a578e893d (unknown)
> I1108 21:00:14.327703 134760 hdfs-scan-node.cc:541] Scan node (id=10) ran into a parse error for scan range hdfs://ns1/scale/tpcds_30000_decimal_parquet/store_returns/sr_returned_date_sk=2452285/b748603c4a063884-6732f825e8cab81a_571258249_data.1.parq(96397514:102400).
> I1108 21:00:14.331266 131105 runtime-state.cc:208] Error from query 67449109ff600bb2:ea366be700000000: Memory Limit Exceeded by fragment: 67449109ff600bb2:20b9
> HDFS_SCAN_NODE (id=9) could not allocate 64.01 KB without exceeding limit.
> Query(67449109ff600bb2:ea366be700000000): Total=4.98 GB Peak=5.02 GB
> Fragment 67449109ff600bb2:ea366be700000000: Total=16.00 KB Peak=16.00 KB
> AGGREGATION_NODE (id=76): Total=8.00 KB Peak=8.00 KB
> Exprs: Total=4.00 KB Peak=4.00 KB
> EXCHANGE_NODE (id=75): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> PLAN_ROOT_SINK: Total=0 Peak=0
> Block Manager: Limit=80.00 GB Total=4.38 GB Peak=4.38 GB
> Fragment 67449109ff600bb2:5949: Total=4.52 GB Peak=4.55 GB
> Runtime Filter Bank: Total=40.00 MB Peak=40.00 MB
> AGGREGATION_NODE (id=49): Total=8.00 KB Peak=8.00 KB
> Exprs: Total=4.00 KB Peak=4.00 KB
> HASH_JOIN_NODE (id=48): Total=244.25 KB Peak=244.25 KB
> HASH_JOIN_NODE (id=47): Total=236.25 KB Peak=236.25 KB
> HASH_JOIN_NODE (id=46): Total=228.25 KB Peak=228.25 KB
> HASH_JOIN_NODE (id=45): Total=220.25 KB Peak=220.25 KB
> HASH_JOIN_NODE (id=44): Total=212.25 KB Peak=212.25 KB
> HASH_JOIN_NODE (id=43): Total=204.25 KB Peak=204.25 KB
> HASH_JOIN_NODE (id=42): Total=196.25 KB Peak=196.25 KB
> HASH_JOIN_NODE (id=41): Total=188.25 KB Peak=188.25 KB
> HASH_JOIN_NODE (id=40): Total=180.25 KB Peak=180.25 KB
> HASH_JOIN_NODE (id=39): Total=172.25 KB Peak=172.25 KB
> HASH_JOIN_NODE (id=38): Total=164.25 KB Peak=164.25 KB
> HASH_JOIN_NODE (id=37): Total=156.25 KB Peak=156.25 KB
> HASH_JOIN_NODE (id=36): Total=148.25 KB Peak=148.25 KB
> HASH_JOIN_NODE (id=35): Total=140.25 KB Peak=140.25 KB
> HASH_JOIN_NODE (id=34): Total=137.14 MB Peak=137.16 MB
> HASH_JOIN_NODE (id=33): Total=265.13 MB Peak=265.15 MB
> HASH_JOIN_NODE (id=32): Total=521.11 MB Peak=521.12 MB
> HASH_JOIN_NODE (id=31): Total=521.11 MB Peak=521.11 MB
> HASH_JOIN_NODE (id=30): Total=521.10 MB Peak=521.11 MB
> HASH_JOIN_NODE (id=29): Total=521.09 MB Peak=521.10 MB
> HASH_JOIN_NODE (id=28): Total=521.08 MB Peak=521.09 MB
> HASH_JOIN_NODE (id=27): Total=521.07 MB Peak=521.08 MB
> HASH_JOIN_NODE (id=26): Total=521.07 MB Peak=521.07 MB
> HASH_JOIN_NODE (id=25): Total=521.07 MB Peak=521.07 MB
> EXCHANGE_NODE (id=50): Total=0 Peak=0
> DataStreamRecvr: Total=16.40 MB Peak=16.40 MB
> EXCHANGE_NODE (id=51): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=2.40 MB
> EXCHANGE_NODE (id=52): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.39 MB
> EXCHANGE_NODE (id=53): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.14 MB
> EXCHANGE_NODE (id=54): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.42 MB
> EXCHANGE_NODE (id=55): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.41 MB
> EXCHANGE_NODE (id=56): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.41 MB
> EXCHANGE_NODE (id=57): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.42 MB
> EXCHANGE_NODE (id=58): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=16.43 MB
> EXCHANGE_NODE (id=59): Total=0 Peak=0
> DataStreamRecvr: Total=5.20 KB Peak=16.41 MB
> EXCHANGE_NODE (id=60): Total=0 Peak=0
> DataStreamRecvr: Total=787.09 KB Peak=16.42 MB
> EXCHANGE_NODE (id=61): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=62): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=63): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=64): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=65): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=66): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=67): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=68): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=69): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=70): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=71): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=72): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=73): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> EXCHANGE_NODE (id=74): Total=0 Peak=0
> DataStreamRecvr: Total=0 Peak=0
> DataStreamSender (dst_id=75): Total=16.00 KB Peak=16.00 KB
> Fragment 67449109ff600bb2:e8: Total=189.75 MB Peak=189.88 MB
> HDFS_SCAN_NODE (id=0): Total=158.33 MB Peak=158.46 MB
> DataStreamSender (dst_id=50): Total=31.40 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:471: Total=6.66 MB Peak=189.88 MB
> HDFS_SCAN_NODE (id=1): Total=0 Peak=158.46 MB
> DataStreamSender (dst_id=51): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:7fa: Total=6.66 MB Peak=189.88 MB
> HDFS_SCAN_NODE (id=2): Total=0 Peak=158.46 MB
> DataStreamSender (dst_id=52): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:b83: Total=6.66 MB Peak=189.88 MB
> HDFS_SCAN_NODE (id=3): Total=0 Peak=158.46 MB
> DataStreamSender (dst_id=53): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:f0c: Total=6.66 MB Peak=189.82 MB
> HDFS_SCAN_NODE (id=4): Total=0 Peak=158.48 MB
> DataStreamSender (dst_id=54): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:1295: Total=6.66 MB Peak=189.82 MB
> HDFS_SCAN_NODE (id=5): Total=0 Peak=158.46 MB
> DataStreamSender (dst_id=55): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:161e: Total=6.66 MB Peak=188.85 MB
> HDFS_SCAN_NODE (id=6): Total=0 Peak=157.42 MB
> DataStreamSender (dst_id=56): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:19a7: Total=6.66 MB Peak=180.94 MB
> HDFS_SCAN_NODE (id=7): Total=0 Peak=149.52 MB
> DataStreamSender (dst_id=57): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:1d30: Total=6.66 MB Peak=180.82 MB
> HDFS_SCAN_NODE (id=8): Total=0 Peak=149.40 MB
> DataStreamSender (dst_id=58): Total=6.65 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:20b9: Total=46.33 MB Peak=180.94 MB
> HDFS_SCAN_NODE (id=9): Total=19.84 MB Peak=149.52 MB
> DataStreamSender (dst_id=59): Total=26.47 MB Peak=31.40 MB
> Fragment 67449109ff600bb2:2442: Total=179.08 MB Peak=188.88 MB
> HDFS_SCAN_NODE (id=10): Total=157.13 MB Peak=157.46 MB
> DataStreamSender (dst_id=60): Total=21.93 MB Peak=31.40 MB
> I1108 21:00:14.332072 131105 hdfs-scan-node.cc:541] Scan node (id=9) ran into a parse error for scan range hdfs://ns1/scale/tpcds_30000_decimal_parquet/store_returns/sr_returned_date_sk=__HIVE_DEFAULT_PARTITION__/b748603c4a063884-6732f825e8cab7db_1285825239_data.25.parq(265525523:102400).
> {code}
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-all-unsubscribe@impala.apache.org
For additional commands, e-mail: issues-all-help@impala.apache.org