Posted to dev@avro.apache.org by "Lucas Martin-King (JIRA)" <ji...@apache.org> on 2012/09/13 08:29:07 UTC
[jira] [Created] (AVRO-1161) Avro-C
Lucas Martin-King created AVRO-1161:
---------------------------------------
Summary: Avro-C
Key: AVRO-1161
URL: https://issues.apache.org/jira/browse/AVRO-1161
Project: Avro
Issue Type: Bug
Components: c
Affects Versions: 1.7.1
Reporter: Lucas Martin-King
There seems to be a bug in the use of avro_schema_record() within the datafile reader. I'm currently investigating this myself and can hopefully submit a patch soon; in the meantime, is there anything I am forgetting to do when using the Avro API?
Our program is opening and closing lots of log files, and Valgrind gives me this message:
==20151== 32,518 (48 direct, 32,470 indirect) bytes in 1 blocks are definitely lost in loss record 62 of 65
==20151== at 0x4C2BFCB: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==20151== by 0x4C2C27F: realloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==20151== by 0x4E3D074: avro_default_allocator (allocation.c:36)
==20151== by 0x4E60F8C: avro_schema_record (schema.c:558)
==20151== by 0x4E61A5E: avro_schema_from_json_t (schema.c:856)
==20151== by 0x4E622E9: avro_schema_from_json_root (schema.c:1083)
==20151== by 0x4E624C9: avro_schema_from_json_length (schema.c:1127)
==20151== by 0x4E3FE92: file_read_header (datafile.c:314)
==20151== by 0x4E40796: avro_file_reader_fp (datafile.c:491)
==20151== by 0x4082D3: LogReader::LogReader(_IO_FILE*, char const*) (LogReader.cpp:32)
==20151== by 0x4066A3: main (main.cpp:206)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (AVRO-1161) Avro-C: Memory Leak in avro_schema_record()
Posted by "Douglas Creager (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/AVRO-1161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Douglas Creager resolved AVRO-1161.
-----------------------------------
Resolution: Fixed
Fix Version/s: 1.7.2
Merged into SVN
> Avro-C: Memory Leak in avro_schema_record()
> -------------------------------------------
>
> Key: AVRO-1161
> URL: https://issues.apache.org/jira/browse/AVRO-1161
> Project: Avro
> Issue Type: Bug
> Components: c
> Affects Versions: 1.7.1
> Reporter: Lucas Martin-King
> Labels: memory_leak
> Fix For: 1.7.2
>
> Attachments: 0001-AVRO-1161.-C-Fix-memory-leak-in-avro-append-cat-mod-.patch, 0001-AVRO-1161.-C-Fix-memory-leak-in-avro-append-cat-mod-.patch
>
>
> A similar report can be reproduced when using avrocat on a single avro file.
[jira] [Updated] (AVRO-1161) Avro-C: Memory Leak in avro_schema_record()
Posted by "Lucas Martin-King (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/AVRO-1161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lucas Martin-King updated AVRO-1161:
------------------------------------
Summary: Avro-C: Memory Leak in avro_schema_record() (was: Avro-C)
[jira] [Commented] (AVRO-1161) Avro-C: Memory Leak in avro_schema_record()
Posted by "Pugachev Maxim (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/AVRO-1161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456997#comment-13456997 ]
Pugachev Maxim commented on AVRO-1161:
--------------------------------------
My gut feeling: this is a bug in avrocat (and friends such as avropipe, avroappend, and avromod), and the schema should be decrefed.
[jira] [Commented] (AVRO-1161) Avro-C: Memory Leak in avro_schema_record()
Posted by "Lucas Martin-King (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/AVRO-1161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13456713#comment-13456713 ]
Lucas Martin-King commented on AVRO-1161:
-----------------------------------------
Hi Pugachev,
Running avrocat itself reproduces the condition. I believe the bug is in avrocat (we wrote our internal code after reading avrocat.c as documentation): it does not call avro_schema_decref() after using avro_file_reader_get_writer_schema() to acquire the schema of the input file.
After calling avro_file_reader_close() there is still one reference to the schema, so calling avro_schema_decref() after decreffing the value and iface releases that memory and stops the leak.
So my question is: should we be decreffing that schema (the writers_schema member of the avro_file_reader_t struct), or should it be decreffed somewhere else?
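To make the refcounts involved here concrete, the lifecycle described above can be modelled with a minimal self-contained sketch. The toy_* names are invented stand-ins for avro_schema_t and avro_file_reader_t, not the actual Avro-C implementation; the sketch only shows where the extra reference comes from and who must release it.

```c
#include <stdlib.h>

/* Invented stand-ins; only the reference counting is modelled. */
typedef struct { int refcount; } toy_schema_t;
typedef struct { toy_schema_t *writers_schema; } toy_reader_t;

/* Opening a file reader parses the header schema with one reference,
 * owned by the reader. */
toy_schema_t *toy_schema_new(void) {
    toy_schema_t *s = malloc(sizeof *s);
    s->refcount = 1;
    return s;
}

/* Models avro_file_reader_get_writer_schema(): increfs before returning,
 * handing the caller a reference of its own. */
toy_schema_t *toy_reader_get_writer_schema(toy_reader_t *r) {
    r->writers_schema->refcount++;
    return r->writers_schema;
}

/* Returns the remaining refcount; a real decref would free at zero. */
int toy_schema_decref(toy_schema_t *s) {
    return --s->refcount;
}

/* Models avro_file_reader_close(): drops only the reader's reference. */
int toy_reader_close(toy_reader_t *r) {
    return toy_schema_decref(r->writers_schema);
}
```

After toy_reader_close() the count is still 1, which is exactly the "one reference to the schema" left behind in avrocat; the caller's own decref is what brings it to zero.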
[jira] [Updated] (AVRO-1161) Avro-C: Memory Leak in avro_schema_record()
Posted by "Lucas Martin-King (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/AVRO-1161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lucas Martin-King updated AVRO-1161:
------------------------------------
Description:
(unchanged apart from one appended line:)
A similar report can be reproduced when using avrocat on a single avro file.
[jira] [Updated] (AVRO-1161) Avro-C: Memory Leak in avro_schema_record()
Posted by "Douglas Creager (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/AVRO-1161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Douglas Creager updated AVRO-1161:
----------------------------------
Attachment: 0001-AVRO-1161.-C-Fix-memory-leak-in-avro-append-cat-mod-.patch
Previous patch had a syntax error. Here's a working one.
[jira] [Updated] (AVRO-1161) Avro-C: Memory Leak in avro_schema_record()
Posted by "Douglas Creager (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/AVRO-1161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Douglas Creager updated AVRO-1161:
----------------------------------
Attachment: 0001-AVRO-1161.-C-Fix-memory-leak-in-avro-append-cat-mod-.patch
That seems right to me: if you look at the definition of avro_file_reader_get_writer_schema(), it calls avro_schema_incref() before returning, so that the returned schema can outlast the file object. It is therefore the caller's responsibility to decref the result.
Here's a patch that updates all four of the command-line programs to do that.
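The convention described here, increfing before returning so the result can outlive its container, is the standard ownership rule for refcounted C APIs. A small self-contained illustration (invented names, not the Avro-C source):

```c
#include <stdlib.h>

typedef struct {
    int refcount;
    int alive;      /* 1 until the last reference is dropped */
} obj_t;

obj_t *obj_new(void) {
    obj_t *o = malloc(sizeof *o);
    o->refcount = 1;
    o->alive = 1;
    return o;
}

void obj_decref(obj_t *o) {
    if (--o->refcount == 0)
        o->alive = 0;    /* a real decref would free(o) here */
}

typedef struct { obj_t *schema; } container_t;

/* Owned getter: takes a new reference on behalf of the caller, so the
 * schema stays alive even after the container is closed.  The caller
 * must balance this with its own obj_decref(). */
obj_t *container_get_schema(container_t *c) {
    c->schema->refcount++;
    return c->schema;
}

void container_close(container_t *c) {
    obj_decref(c->schema);
    c->schema = NULL;
}
```

Forgetting the caller-side decref leaks exactly one reference per file opened, which matches the "1 blocks are definitely lost" pattern in the Valgrind report above.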
[jira] [Commented] (AVRO-1161) Avro-C: Memory Leak in avro_schema_record()
Posted by "Pugachev Maxim (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/AVRO-1161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455610#comment-13455610 ]
Pugachev Maxim commented on AVRO-1161:
--------------------------------------
Lucas, can you create a test for this problem? I'll try to investigate it too.
[jira] [Commented] (AVRO-1161) Avro-C: Memory Leak in avro_schema_record()
Posted by "Lucas Martin-King (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/AVRO-1161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455528#comment-13455528 ]
Lucas Martin-King commented on AVRO-1161:
-----------------------------------------
Upon further investigation, this is merely a side effect of a missing avro_schema_decref() somewhere within datafile.c or schema.c.
If I add an additional avro_schema_decref() to avro_file_reader_close(), the leak is resolved: the refcount reaches zero on the second call, freeing the schema. However, this is a band-aid solution, and I'm going to try to find where the decref call is missing.
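A caveat worth noting about the band-aid: since the getter hands the caller a reference of its own, a program that already decrefs the schema it obtained would release one reference too many if close() also dropped an extra one. A toy model of that imbalance (invented names, not the Avro-C source):

```c
/* Toy refcount model showing why an extra decref inside close() would
 * over-release for callers that already decref correctly. */
typedef struct { int refcount; } rc_t;

int rc_decref(rc_t *r) { return --r->refcount; }

/* Band-aid variant of close(): drops the reader's own reference AND the
 * one the getter handed to the caller. */
int close_with_bandaid(rc_t *schema) {
    rc_decref(schema);        /* reader's own reference */
    return rc_decref(schema); /* extra "band-aid" decref */
}
```

This is one reason fixing the callers, rather than close(), is the safer place for the missing decref.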