
Re: SEGFAULT if bash script make "source" for itself

From: bogun.dmitriy
Subject: Re: SEGFAULT if bash script make "source" for itself
Date: Thu, 28 Aug 2014 11:49:02 -0700

2014-08-28 11:30 GMT-07:00 Eric Blake <address@hidden>:

> On 08/28/2014 12:02 PM, address@hidden wrote:
> > IMHO no user action should ever lead to SIGSEGV! I am not objecting
> > to recursive "source" itself. But when I got a SIGSEGV from bash, I had
> > no idea why it happened. I made the recursive "source" by mistake and
> > spent a lot of time figuring out what exactly led to the SIGSEGV.
> SIGSEGV is what happens on stack overflow, unless you integrate a stack
> overflow detector like GNU libsigsegv with your sources to catch the
> segv and replace it with a nice error message.
I know when and why a program can get SIGSEGV.
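For reference, here is a minimal reproducer of the crash being discussed (the /tmp path and the lowered stack limit are my own choices, just to keep the demo fast; on a default 8 MB stack it takes longer to die):

```shell
# A script that sources itself recurses until bash exhausts the process
# stack and the kernel kills it with SIGSEGV -- no error message from bash.
cat > /tmp/self_source.sh <<'EOF'
source "$0"
EOF

# Run it with a small stack so the overflow happens quickly; an exit
# status above 128 means the process was killed by a signal.
(ulimit -s 1024; bash /tmp/self_source.sh)
status=$?
echo "exit status: $status"
```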

> As to whether or not user code should be able to cause stack overflow,
> we can't prevent it.  Reliably preventing stack overflow would be
> equivalent to solving the Halting Problem, which we cannot do; so all we
> can do is detect when it happens.
If we follow this logic, we should not try to catch any incorrect user
behaviour at all; we would just get errors/signals from the kernel.

Simple situation:
$ ((1/0))
bash: ((: 1/0: division by 0 (error token is "0")

Why is there a check for division by zero? Can we predict it? No. But we
can detect it... and we output a nice, detailed error message.

So why should I get SIGSEGV instead of a nice, detailed error message in
the recursion case? We can detect it.
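Worth noting: bash (since 4.2, if I remember correctly) already applies exactly this detect-and-report pattern to *function* recursion via the FUNCNEST variable; it just has no equivalent for `source`. A quick sketch:

```shell
# With FUNCNEST set, bash aborts runaway function recursion with a clear
# error message ("maximum function nesting level exceeded") instead of
# recursing until the stack overflows.
bash -c 'recurse() { recurse; }; FUNCNEST=100; recurse'
funcnest_status=$?
echo "exit status: $funcnest_status"
```

The shell terminates the command with a normal nonzero exit status rather than dying from a signal, which is the behavior being requested here for recursive `source`.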

> >
> > Put a configurable limit on the depth of recursive source. There is
> > almost no legitimate use for recursive source at a depth of, say,
> > 10000. If someone needs such recursion depth, they can always raise
> > the limit or turn it off by setting it to 0.
> The GNU Coding Standards state that GNU software cannot have arbitrary
> limits by default.  Any limit we pick, other than unlimited (your
> proposal of turning it to 0), would be an arbitrary limit for someone
> who has a machine with more memory and a larger stack.  So 0 is the only
> sane default, but that's no different than what we already have.

IBM produced a new CPU: it solves an infinite loop in 8 seconds.

How does a bigger amount of memory save you from infinite recursion? It
only leads to a longer delay before the SIGSEGV, nothing else.

 $ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 30847
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 30847
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
address@hidden ~ $

So... in real life we do have limits. Some of them are turned off, but
they exist and can be adjusted.

And if I have an option which I can set to a value suitable for me, and
it can save me by showing a good error message in case of infinite
recursion, I will use it. Others can leave it unlimited. We could even
have two options: one sets the recursion depth limit, the other sets the
action taken when the limit is reached (deny deeper recursion, or just
print a warning).
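Until something like this exists in bash itself, the proposed limit can be approximated at the script level. This is only a sketch; the variable names (SOURCE_DEPTH, SOURCE_DEPTH_LIMIT) are my own invention, not bash features:

```shell
# Guard to put at the top of any script that might source itself:
# count nesting in a plain shell variable and refuse to go deeper than
# a configurable limit (0 = unlimited, mirroring the proposal above).
cat > /tmp/guarded.sh <<'EOF'
: "${SOURCE_DEPTH_LIMIT:=5}"
SOURCE_DEPTH=$(( ${SOURCE_DEPTH:-0} + 1 ))
if (( SOURCE_DEPTH_LIMIT > 0 && SOURCE_DEPTH > SOURCE_DEPTH_LIMIT )); then
    echo "source recursion limit ($SOURCE_DEPTH_LIMIT) exceeded" >&2
    return 1 2>/dev/null || exit 1   # return when sourced, exit otherwise
fi
source "$0"
EOF

bash /tmp/guarded.sh
guard_status=$?
echo "exit status: $guard_status"
```

Instead of a SIGSEGV, the script stops with a readable diagnostic and an ordinary exit status of 1.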

> --
> Eric Blake   eblake redhat com    +1-919-301-3266
> Libvirt virtualization library http://libvirt.org
