MAIL_USE_FLOCK and Debian.

From: Rob Browning
Subject: MAIL_USE_FLOCK and Debian.
Date: Sat, 15 Feb 2003 01:02:11 -0600
User-agent: Gnus/5.090008 (Oort Gnus v0.08) Emacs/21.2 (i386-pc-linux-gnu)

On a Debian system, the setting of MAIL_USE_FLOCK in s/gnu-linux.h is
guaranteed to be wrong, so right now, Debian applies the patch below
and adds -DDEBIAN to CFLAGS in debian/rules.  Of course this fix isn't
appropriately general because it only works for people using
debian/rules to build Emacs packages.  So I'd like to figure out what
the "right" way to fix it would be.  The general rule is that on a
Debian system, you should only use liblockfile, nothing else.

Though I have patches that make sure that liblockfile is used and that
the location of the mail spool is detected correctly, we still need an
acceptable way to make sure that on a Debian system, MAIL_USE_FLOCK
isn't defined.

My patches also add support for dynamic detection and use of a
mail-spool-directory variable, though I'm not sure those changes, as
they stand, will necessarily be wanted upstream.  I discussed them
with Gerd some time back, but now I need to finish re-evaluating them
against current CVS, and to discuss them here to make sure they're
still desired.


--- emacs21-21.2.orig/src/s/gnu-linux.h
+++ emacs21-21.2/src/s/gnu-linux.h
@@ -130,7 +130,14 @@
    programs.  I assume that most people are using newer mailers that
    have heard of flock.  Change this if you need to. */
+/* On Debian/GNU/Linux systems, configure gets the right answers, and
+   that means *NOT* using flock.  Using flock is guaranteed to be the
+   wrong thing. See Debian Policy for details. */
+#ifdef DEBIAN
+#  undef MAIL_USE_FLOCK
+#endif

Rob Browning
rlb @defaultvalue.org, @linuxdevel.com, and @debian.org
Previously @cs.utexas.edu
GPG starting 2002-11-03 = 14DD 432F AE39 534D B592  F9A0 25C8 D377 8C7E 73A4
