[PLUG] To secondary MX or not to
Michael Robinson
michael at robinson-west.com
Sat Apr 19 11:32:01 UTC 2003
On Wednesday 16 April 2003 03:56 pm, you wrote:
> The advice should've been not to have a secondary MX if your primary MX
> is always available.
>
> The reasons spammers target secondary MX servers are:
> 1. They hope the policies are lenient on the secondary (usually due to
> not having authoritative account information held locally).
> 2. They believe the secondary will not be faster due to most mail routing
> through the primary.
> 3. They don't think anyone examines the logs on the secondary.
>
> The reasons to have a secondary MX anyway are:
> 1. Scheduled or unscheduled maintenance on the primary MX.
> 2. Network outages.
> 3. Too much load on the primary.
>
Wouldn't the obvious thing to do be to set the secondary up with
the same rules as the primary? Network filesystems are popular,
but I wouldn't want to use one from the primary, because the
primary going down would then take the secondary out as well. The
best way to set up a secondary, from what I can see, is with the
same rules as the primary and with the ability to substitute the
secondary for the primary. The latter may mean that people have to
set up another IMAP account, unless there is a way to transparently
point one IMAP server address at whichever exchanger is currently
serving the mail; if the primary is out, that would be the
secondary.
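For reference, the preference-ordered fallback that makes a primary/
secondary MX pair work can be sketched roughly like this (hostnames
and preference values below are made up, and real MTAs do this
internally):

```python
# Sketch of how a sending MTA chooses among MX records.
# Records are (preference, hostname); lower preference is tried first.
# Hostnames and preference values here are made-up examples.

def order_mx(records):
    """Return MX hosts in the order a sender should try them."""
    return [host for _, host in sorted(records)]

def pick_server(records, reachable):
    """Try hosts in MX order; return the first reachable one."""
    for host in order_mx(records):
        if host in reachable:
            return host
    return None  # all exchangers down: sender queues and retries later

mx = [(20, "mx2.example.com"), (10, "mx1.example.com")]

# While the primary is up, everything goes to mx1; when it is down,
# mail falls back to the secondary, mx2.
```

If the secondary really does run the same rules as the primary, this
fallback is transparent to senders; only the IMAP side needs the
substitution trick described above.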
The more I think about it, the more I realize there should probably
be a set of redundant network file servers behind the
gateway/Internet/mail servers, distributing authentication services
and protecting the data used on the outside boxes. If you put these
cross-server authentication services on the outside boxes
themselves, there is the problem of having to distribute
information from a server that is live, which becomes a problem
when none of these machines is dedicated to staying up for
authentication. If you distribute the same information from all of
your servers, you have just defeated the point of distributing
authentication information, and you probably could have done it
more easily by manually keeping each server's password file the
same. More servers also means more power, though I guess that can
be alleviated by using older computers. The other issue with
distributed authentication is that it is common to have some
accounts that are unique to each server and shouldn't be available
on all of them.
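That last point, replicating shared accounts while keeping
per-server accounts local, can be sketched as a simple filter over
a passwd-style map. All account names below are hypothetical:

```python
# Sketch: replicate only shared accounts to other servers, keeping
# server-local accounts (e.g. role or system accounts) out of the
# push. All account names here are hypothetical.

def accounts_to_replicate(passwd, local_only):
    """Return the subset of a passwd-style dict safe to push elsewhere."""
    return {user: entry for user, entry in passwd.items()
            if user not in local_only}

passwd = {
    "alice":  "alice:x:1001:1001::/home/alice:/bin/bash",
    "bob":    "bob:x:1002:1002::/home/bob:/bin/bash",
    "backup": "backup:x:900:900::/var/backup:/sbin/nologin",
}
local_only = {"backup"}   # unique to this box; must not appear elsewhere
```

The same idea applies whether the transport is a directory service
or just copying files around; the exclusion list is what keeps
server-unique accounts from leaking everywhere.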
> ... is available, but some consider that practice taking
> advantage of spool space on the sending servers...

With the logs, the easy solution is remote logging. There is one
thing I really don't like about that, though: having the logs from
multiple machines dump into one set of files on the log server. I
would like to have each machine's logs in separate files, or at
least have the log files sorted so that, in chronological order,
there would be all logs from machine A, ..., followed by all logs
from machine N. That probably involves turning logging off, sorting
the logs, then restarting the log daemons, and with that approach
the end of the log would always be unsorted. It would be nice,
instead of turning logging off, to dump to a temporary log during
the sort. Then one would sort the original log files only once and
afterwards not dump to them directly, but to a temporary instead.
With two sets of temporaries that are recycled over time, logging
would never have to be suspended except when space is exhausted:
one temporary gets sorted and appended to the main logs while the
other catches new logs. Many references say to put all logs into
one file, but this is ugly because there are different types of
logs which are easier to read in their own files, and the combined
log file gets very long very fast. I've seen mailers that probe
logs; I'd much rather figure out how to get a Perl script to put
graphics based on the logs, such as histograms and pie charts, on a
private LAN management web site. The easier it is to see important
events in the logs, the more likely an administrator is to take the
needed action, reducing the security risk of providing any service
over potentially malicious networks.
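The per-machine grouping described above amounts to keying each
syslog line on its hostname field and sorting chronologically
within each group. A minimal sketch (the log lines are invented,
and this assumes the classic "MMM DD HH:MM:SS hostname program:
message" syslog layout):

```python
# Sketch: regroup a combined syslog file so each machine's lines
# appear together, in time order within each machine. Assumes the
# classic syslog line layout; sample lines are invented.
import time

def sort_key(line):
    stamp, host = line[:15], line.split()[3]
    # Classic syslog timestamps omit the year; parse what is there.
    t = time.strptime(stamp, "%b %d %H:%M:%S")
    return (host, t)

def regroup(lines):
    """All of machine A's lines first, then B's, ..., each in time order."""
    return sorted(lines, key=sort_key)

combined = [
    "Apr 19 11:32:01 mx2 postfix: connect from unknown",
    "Apr 19 11:30:45 mx1 sshd: session opened",
    "Apr 19 11:31:10 mx2 postfix: disconnect",
    "Apr 19 11:33:02 mx1 sshd: session closed",
]
# After regrouping: both mx1 lines first, then both mx2 lines.
```

Run over the "inactive" temporary while the other temporary catches
new lines, this is exactly the two-buffer scheme above: sort one,
append it to the per-machine main logs, swap.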
I don't know about the speed issue. The cool thing to do would be
to have a mail server stop accepting mail when it's jammed and open
up again when, say, 20% of the spool space is free. I would think
that speed is a spammer's friend, as they can send more junk to
you, unless the purpose of the spamming is to cause a DoS. At the
end of the day, connections from spammers can possibly be refused,
but what's to stop their flood of requests from taking away all
your bandwidth anyhow? One doesn't have to be accepted to send a
connection request. This seems like an area where a network
authority severing the spammer's connection to the Net is the only
solution. There's something called tarpitting, where you allow the
connection but it goes into a black hole. I guess you'd have to
allow the spammer to flood the tar pit, though, which probably
means that the ISP has to do the tarpitting.
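The jam-then-reopen behavior is really just hysteresis on free
spool space, so the server doesn't flap at the boundary. A minimal
sketch with made-up thresholds (the 2% close point is my
assumption; only the 20% reopen figure comes from the idea above):

```python
# Sketch: hysteresis on spool free space. Stop accepting mail when
# the spool is nearly full; resume only once 20% is free again.
# CLOSE_FRACTION is an assumed value; REOPEN_FRACTION is the 20%
# figure from the text.

REOPEN_FRACTION = 0.20   # resume once 20% of spool space is free
CLOSE_FRACTION  = 0.02   # stop accepting under 2% free (assumption)

def should_accept(free_bytes, total_bytes, currently_accepting):
    free = free_bytes / total_bytes
    if currently_accepting:
        return free > CLOSE_FRACTION   # keep going until nearly full
    return free >= REOPEN_FRACTION     # stay closed until 20% free

# Jammed at 1% free -> refuse; still refusing at 10%; reopen at 25%.
```

Of course this only protects the spool; as noted above, nothing in
it stops the connection attempts themselves from eating bandwidth.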
-- Michael C. Robinson