Segmentation fault when nesting several thousand heredocs
From: Tom
Subject: Segmentation fault when nesting several thousand heredocs
Date: Fri, 10 Feb 2017 16:15:35 +1100
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-unknown-linux-gnu'
-DCONF_VENDOR='unknown' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
-DSHELL -DHAVE_CONFIG_H -I. -I. -I./include -I./lib -D_FORTIFY_SOURCE=2
-march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong
-DDEFAULT_PATH_VALUE='/usr/local/sbin:/usr/local/bin:/usr/bin'
-DSTANDARD_UTILS_PATH='/usr/bin' -DSYS_BASHRC='/etc/bash.bashrc'
-DSYS_BASH_LOGOUT='/etc/bash.bash_logout' -Wno-parentheses -Wno-format-security
uname output: Linux star 4.9.6-1-ARCH #1 SMP PREEMPT Thu Jan 26 09:22:26 CET
2017 x86_64 GNU/Linux
Machine Type: x86_64-unknown-linux-gnu
Bash Version: 4.4
Patch Level: 11
Release Status: release
Description:
A segmentation fault occurs when nesting several thousand heredocs, as in
the example in the Repeat-By section. I have tested this on several
different distros, OSes and versions; all of them are affected. From memory,
those were OS X, Linux, Windows (Cygwin), and a jailbroken iPad.
I did not include it in the title as I'm not knowledgeable enough to be sure,
but I believe this is a stack overflow: bash dies after creating tens of
thousands of stack frames, and changing `ulimit -s` seems to affect how many
heredocs trigger the bug.
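As a back-of-envelope check of the stack-overflow theory (my own arithmetic,
not something measured in the report): the deepest frame in the backtrace
below is #82723, and the default stack here is 8192 KiB, which works out to
roughly 100 bytes per frame — a plausible size for these call frames.

```shell
# Rough consistency check (assumes the default `ulimit -s` of 8192 KiB;
# 82723 is the deepest frame number in the backtrace below).
stack_kib=8192
frames=82723
bytes_per_frame=$(( stack_kib * 1024 / frames ))
echo "approx. bytes per frame: $bytes_per_frame"
```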
I actually discovered this maybe six months ago but wasn't sure whether to
report it, since you'd arguably expect a program to break when you abuse it
like this. After noticing that zsh doesn't segfault, though, I decided to
write it up.
When analysing a core dump from bash, here's the bottom (outermost end) of
the stack:
(gdb) bt -7
#82717 0x0000000000437ed3 in execute_command_internal
(command=0x27e9188, asynchronous=0x0, pipe_in=0xffffffff, pipe_out=0xffffffff,
fds_to_close=0x27e94e8) at execute_cmd.c:971
#82718 0x0000000000437183 in execute_command (command=0x27e9188) at
execute_cmd.c:405
#82719 0x000000000043b23a in execute_for_command
(for_command=0x27e9208) at execute_cmd.c:2802
#82720 0x0000000000437cbc in execute_command_internal
(command=0x27e9248, asynchronous=0x0, pipe_in=0xffffffff, pipe_out=0xffffffff,
fds_to_close=0x27e91e8) at execute_cmd.c:883
#82721 0x0000000000437183 in execute_command (command=0x27e9248) at
execute_cmd.c:405
#82722 0x0000000000422243 in reader_loop () at eval.c:180
#82723 0x000000000041fe4a in main (argc=0x2, argv=0x7ffdc73837e8,
env=0x7ffdc7383800) at shell.c:792
And it just repeats those execute_command/execute_for_command frames about
80k times, until it reaches these frames at the innermost end of the stack:
(gdb) bt 10
#0 0x00000000004e60fb in morecore (nu=0x27e9428) at malloc.c:554
#1 0x00000000004e6582 in internal_malloc (n=0x20, file=0x4ecf00
"unwind_prot.c", line=0xe7, flags=0x1) at malloc.c:786
#2 0x00000000004e6fd2 in sh_malloc (bytes=0x20, file=0x4ecf00
"unwind_prot.c", line=0xe7) at malloc.c:1187
#3 0x000000000048f69e in sh_xmalloc (bytes=0x20, file=0x4ecf00
"unwind_prot.c", line=0xe7) at xmalloc.c:183
#4 0x00000000004704a2 in add_unwind_protect_internal (cleanup=0x0,
arg=0x4ea69e "execute-command") at unwind_prot.c:231
#5 0x0000000000470269 in without_interrupts (function=0x470445
<add_unwind_protect_internal>, arg1=0x0, arg2=0x4ea69e "execute-command") at
unwind_prot.c:123
#6 0x0000000000470321 in add_unwind_protect (cleanup=0x0, arg=0x4ea69e
"execute-command") at unwind_prot.c:160
#7 0x0000000000470297 in begin_unwind_frame (tag=0x4ea69e
"execute-command") at unwind_prot.c:133
#8 0x000000000043714b in execute_command (command=0x1f13908) at
execute_cmd.c:401
#9 0x000000000043abd0 in execute_connection (command=0x1f13a48,
asynchronous=0x0, pipe_in=0xffffffff, pipe_out=0xffffffff,
fds_to_close=0x2dccec8) at execute_cmd.c:2592
#10 0x0000000000437ed3 in execute_command_internal (command=0x1f13a48,
asynchronous=0x0, pipe_in=0xffffffff, pipe_out=0xffffffff,
fds_to_close=0x2dccec8) at execute_cmd.c:971
#11 0x0000000000437183 in execute_command (command=0x1f13a48) at
execute_cmd.c:405
#12 0x000000000043abd0 in execute_connection (command=0x1f13b88,
asynchronous=0x0, pipe_in=0xffffffff, pipe_out=0xffffffff,
fds_to_close=0x2dccd28) at execute_cmd.c:2592
#13 0x0000000000437ed3 in execute_command_internal (command=0x1f13b88,
asynchronous=0x0, pipe_in=0xffffffff, pipe_out=0xffffffff,
fds_to_close=0x2dccd28) at execute_cmd.c:971
#14 0x0000000000437183 in execute_command (command=0x1f13b88) at
execute_cmd.c:405
#15 0x000000000043abd0 in execute_connection (command=0x1f13cc8,
asynchronous=0x0, pipe_in=0xffffffff, pipe_out=0xffffffff,
fds_to_close=0x2dccd08) at execute_cmd.c:2592
(More stack frames follow...)
And here's a full dump of the frame it died in, morecore():
(gdb) bt full 1
#0 0x00000000004e60fb in morecore (nu=0x27e9428) at malloc.c:554
mp = 0x3
nblks = 0x0
siz = 0x27e9388
sbrk_amt = 0x0
set = {
__val = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0}
}
oset = {
__val = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0}
}
blocked_sigs = 0x0
(More stack frames follow...)
Repeat-By:
This function generates a script containing 40,000 nested heredocs. Pipe its
output to bash, or redirect it to a file and have bash run that.
function heredoc_abuse {
    num=${1:-40000}
    printf -- "---> stack size: %d\n" "$(ulimit -s)" >&2
    printf -- "---> generating %d heredocs\n" "$num" >&2
    for ((i=1; i<=num; i++)); do printf "for x$i in 1; do cat << A\n"; done
    for ((i=1; i<=num; i++)); do printf "A\n"; done
    for ((i=1; i<=num; i++)); do printf "done\n"; done
}
With three levels of nesting, the generated script looks like this:
for x1 in 1; do cat << A
for x2 in 1; do cat << A
for x3 in 1; do cat << A
A
A
A
done
done
done
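As a sanity check (my own addition, not part of the original repro), the
generator always emits exactly 3×num lines — one `for` line, one `A`, and one
`done` per nesting level — so a harmless small run is easy to verify before
piping the full 40,000 into bash:

```shell
# Same three loops as heredoc_abuse above, at a harmless size (num=3).
num=3
script=$(
  for ((i=1; i<=num; i++)); do printf 'for x%d in 1; do cat << A\n' "$i"; done
  for ((i=1; i<=num; i++)); do printf 'A\n'; done
  for ((i=1; i<=num; i++)); do printf 'done\n'; done
)
lines=$(printf '%s\n' "$script" | wc -l)
echo "generated $lines lines"   # 3 * num = 9
```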
~~~~~~~~~~~~~~~~~~~~~~
Tom Shaddock
hello@apricot.pictures