[PLUG] mixing netmasks

Aaron Ten Clay aaron at madebyai.com
Wed May 10 15:36:16 UTC 2006


On Wed May 10 2006 06:45, Brian Beattie wrote:
> Somebody I know is proposing to mix netmasks on the same physical
> network.  That is to say all machines that are supposed to talk to each
> other (servers and clients of those servers) would be in 10.1.1.0/24.
> Engineering machines (prototypes, machines under test) would be in
> 10.1.0.0/16 in the 10.1.2 range and engineering workstations, which
> needed to talk to the servers and the test machines would be in the
> 10.1.1.0 range with a netmask of 255.255.0.0
> 
> I told him that that was a bad idea, and I think it would cause
> problems.  Am I talking through my hat again? (most of the machines on
> the 10.1.1.0/24 net are M$ boxen and most of the engineering
> workstations are Linux.)

The only "problem" you'll run into is that any machine configured as 10.1.1.0/24 (the Windows boxen) will NOT be able to talk to machines at 10.1.<anything-but-1>.x, because its netmask says those addresses are not local and it must go through a gateway to reach them. Having a router send that traffic back out the same interface it came in on is considered very bad practice, and there's no reason to do it here.
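To make that concrete, here is a quick sketch with Python's standard ipaddress module (purely illustrative; the addresses are just the ones from the proposal) showing the on-link decision each host makes from nothing but its own address and netmask:

#!/usr/bin/env python3
# Does 'destination' fall inside the sending host's own subnet?
# If not, the host hands the packet to its default gateway.
from ipaddress import ip_interface, ip_address

def is_on_link(local_iface, destination):
    return ip_address(destination) in ip_interface(local_iface).network

# A server/Windows box configured as 10.1.1.10/24:
print(is_on_link("10.1.1.10/24", "10.1.1.50"))  # True  - ARPs and talks directly
print(is_on_link("10.1.1.10/24", "10.1.2.50"))  # False - tries to use a gateway

# An engineering workstation configured as 10.1.1.50/16:
print(is_on_link("10.1.1.50/16", "10.1.2.20"))  # True  - test boxes look local
print(is_on_link("10.1.1.50/16", "10.1.1.10"))  # True  - servers look local too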

As suggested by another listee, routers were invented precisely for this situation. Either give all the machines on the segment the same netmask, or make two segments.

If the idea is to make two "networks" and keep the machines from talking to each other, you need separate physical networks, because anyone can change their IP address, spoof ARP packets, or use other malicious methods to see data you don't want them to see. At the very least, use two completely separate ranges (10.1.1.0/24 and 10.1.2.0/24, or the like) instead of overlapping two ranges of different sizes.
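The overlap is easy to demonstrate with the same module (again, just a sketch): the proposed /16 completely contains the /24, so there is really only one "network" with two different views of it:

#!/usr/bin/env python3
from ipaddress import ip_network

servers     = ip_network("10.1.1.0/24")   # proposed server/client range
engineering = ip_network("10.1.0.0/16")   # proposed engineering range

print(servers.subnet_of(engineering))     # True  - the /24 sits inside the /16
print(engineering.overlaps(servers))      # True  - the two "networks" overlap

# The non-overlapping alternative:
print(ip_network("10.1.1.0/24").overlaps(ip_network("10.1.2.0/24")))  # False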

-Aaron