[PLUG] I wonder if I can find an opinion here...
Wil Cooley
wcooley at nakedape.cc
Sat Jul 15 01:06:47 UTC 2006
On Fri, 2006-07-14 at 16:23 -0700, Steve Bonds wrote:
Here are my additions, but they mostly apply to Solaris (8 and older)
and AIX; I don't have much experience with OS X or HP-UX, and only a
modicum of experience with ancient versions of SCO OpenServer, Data
General's DG-UX, NeXTSTEP, BSDi's BSD/OS, and IRIX.
> GAIN:
> + lower cost per unit of performance
> + easier hiring (lots of linux folks around)
> + consistent OS from desktop to server
> + avoid licensing headaches
+ Great platform for developing (or developing with) and deploying
open source applications and languages.
* Solaris is probably much better these days with the work of the
Blastwave project (http://www.blastwave.org/). With SFW it was hit
or miss.
* AIX, even with AIX Toolbox (http://sf.net/projects/aixtoolbox),
sucks.
* Most modern Linux distributions have almost all the applications
you want readily available, either directly from the distributor
or from a 3rd party (like RPMforge).
* Most applications are developed first on Linux and then ported.
Some are developed on *BSD and ported, but they're usually ported
much more quickly and more often.
* (Nearly) guaranteed to have access to the compiler that built the
rest of the system, so building applications that use libraries
that are sensitive to compiler maker and version is less painful
(think OpenSSL, Perl modules).
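The compiler-consistency point generalizes beyond Perl. As an illustration (not from the original post), Python records which compiler built the interpreter, so you can check for a mismatch before compiling extension modules against it:

```python
import sysconfig

# Ask the interpreter which C compiler built it. Extension modules
# (the same situation as Perl XS modules or OpenSSL bindings) are
# safest when compiled with the same compiler family and version.
cc = sysconfig.get_config_var("CC")
print("This Python was built with:", cc)
```

On a platform whose vendor compiler differs from the one on your PATH, comparing this value against `cc --version` is a quick sanity check before a build.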
+ Sane patch and package management (well, except for SUSE). This
should be both here and in the previous section. When I don't have
a package already available, I can build it without too much
suffering, penance, and ritual sacrifice. AIX at least supports RPM,
but it's an ancient version. Good luck using a network installer
like apt or yum. (Well, AIX has NIM, which is kinda okay. It's
also kinda shitty.) It's also generally possible to update a Linux
system without rebooting; userland updates aren't closely tied to
kernel updates. Not so with AIX or Solaris.
+ Modern, flexible userland. At one time I thought I should
discipline myself to using ksh with vi mode, but I've since realized
that my life is too short and my typing too bad.
* The proprietary UNIX systems lack a lot of things that seem
pretty basic in Linux systems--log rotation,
who's-got-that-port-open (netstat/lsof), vim, GNU utilities,
a working top, tcpdump on localhost (you couldn't do that on Solaris).
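As a rough illustration of the who's-got-that-port-open question, here is a minimal Python stand-in for when you don't have lsof or a netstat with `-p` (the port number in the usage line is arbitrary):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is listening on host:port over TCP.
    A crude stand-in for `lsof -i :PORT`; unlike lsof, it cannot
    tell you WHICH process owns the socket, only that one does."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on a successful connection,
        # an errno (e.g. ECONNREFUSED) otherwise.
        return s.connect_ex((host, port)) == 0

print(port_in_use(8080))
```

On Linux, lsof and `netstat -p` remain the right tools, since they also report the owning PID and command name.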
+ Depending on what system you get, the ability to replace a failed
piece of hardware within an hour or two.
+ Active, vibrant communities of users in a myriad of
channels--mailing lists, web sites, IRC, IM (anyone else tried Qunu
yet?), Usenet. There's an AIX channel on Freenode and an okay
newsgroup; Solaris has a more active IRC channel and a few more
lists, but it's still paltry.
+ Periodicals. Oh, SysAdmin is okay, I guess, but I have yet to find
either web or paper periodicals as good as LWN, LJ, LG, and the
myriad of other newsblogs like linux.com, slashdot, etc. There used
to be UNIX Review, which became Server/Workstation Something, then
disappeared. There used to be AIXpert, which ultimately became a
column in UNIX Review and went down with it. I guess there's an
AIX journal printed by the same people that print the IBM zSeries
mainframe journal I see a few people around the office reading, but
I doubt that a mainframe publisher can really speak UNIX.
+ Sane, sensible defaults. I guess there's a lot more concern for
legacy with proprietary UNIX systems, which explains why the default
syslog configurations for AIX and Solaris scatter logs everywhere.
/var/adm, WTF?? The FHS (née FSSTND) is really wonderful and I wish
proprietary vendors would get on board.
+ Sane configuration and init system. There are some nice things
about the ODM in AIX (it's a configuration database, not to be
compared with the Windows registry), but most SysV systems are
lame. SUSE's rc.config is better than plain SysV, and even that
kinda sucks. The SRC in AIX is kinda nice (it's a system starter
like DJB's daemontools) and I hear there's something nice in
Solaris 10. But the init scripts they do have, when they have
them, look like they were written by lazy interns.
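For readers who haven't met daemontools or SRC, the core idea is just a supervising loop that restarts a service whenever it exits. A toy sketch (the bounded restart count exists only so the example terminates; real supervisors loop forever and log each exit):

```python
import subprocess
import sys
import time

def supervise(cmd, max_restarts=3, delay=0.1):
    """Daemontools/SRC-style supervision in miniature: run cmd and,
    whenever it exits, start it again after a brief backoff."""
    runs = 0
    while runs < max_restarts:
        subprocess.run(cmd)   # blocks until the child exits
        runs += 1
        time.sleep(delay)     # brief backoff before restarting
    return runs

# Supervise a trivially short-lived "service" three times.
supervise([sys.executable, "-c", "pass"])
```

The contrast with classic SysV rc scripts is that nothing here forks into the background and gets forgotten; the supervisor always knows whether its child is running.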
I could probably go on, but it's Friday evening and I've got things to
do.
> LOSE:
> + app certification only on UNIX/Windows (this is becoming less common)
> + serial console that always works
> + proprietary UNIX iron hardware diagnostics
> + single-image scaling to large numbers of CPUs
> + Enterprise-quality OS support from vendor
> The hardware diags built into dedicated UNIX servers can be impressive
> when compared to those in the x86 world. For example, the exact DIMM
> that's tossing the single-bit errors can be identified. This sure
> beats DIMM-swapping between memtest86+ runs! If you need to take a
> memory dump for debugging purposes, on some servers you just push a
> button and it dumps. The ability to achieve tight integration between
> the hardware and the OS can be an advantage-- albeit a costly one.
Hm, a lot of that depends on who's making the PC. The expensive big iron
from the proprietary monoliths has a lot of the same capabilities. I
used to be a lot more impressed with it than I am now. The higher-end
stuff has hot-swappable CPUs, DIMMs, etc.
I'm especially unimpressed with the failure rate of absurdly expensive
hardware. For that matter, I'm unimpressed with the failure rate of
their OSes too--I managed to kill AIX the other day with a simple
software installation.
> Many UNIX systems routinely run applications on servers with over a
> hundred processors. Linux isn't there yet. Clustering is not always
> a viable alternative, either.
Other than the old Cray vector machines, I don't think even the
highest-end UNIX hardware goes to a hundred--the Sun Fire E25k goes to
72 CPUs, with something like hyperthreading to make 144.
http://www.sun.com/servers/highend/sunfire_e25k/index.jsp
The p5 595 only goes to 64 cores, plus 2x simultaneous multithreading (SMT):
http://www-03.ibm.com/servers/eserver/pseries/hardware/highend/595.html
Sure, those are bigger than PCs; the biggest xSeries goes to 4 CPUs,
which is 8 cores (plus 2x hyperthreading).
But in most cases, at least with the IBM pSeries systems I use, boxes
that big are carved into partitions that act as smaller hosts--the
biggest LPAR in use is 4 dedicated CPUs. The problem is that a lot of
workloads still don't scale across lots and lots of CPUs as well as one
might like.
Sure, I get nervous hearing about people using 32GB of memory in a PC
running Linux; unless I had a lot of support and wasn't fighting an
I'll-show-you-Linux-on-a-PC-can-be-just-as-good fight, I wouldn't try
it.
For running huge databases and workloads like that, a pSeries or Sun
running their respective OSes is probably a good choice. But front-end
and middle-tier application layers? Much better on Linux; these tend to
cluster or distribute well.
> Lastly, my support experiences with RedHat have been awful. I get
> better help from the 'net at large. The proprietary OS vendors use
> their support as a selling point and it's generally good, and in some
> cases can be excellent.
There are few things in the world I'd like to do less than calling
technical support: waiting on hold, hunting around for the right
customer or license number, explaining my problem to half a dozen
people over the telephone, most of whom just route me to someone else,
never explain the details about what went wrong, and rarely communicate
back to engineering (esp. w/ stuff that's not system-critical, like the
fact that their C headers are brain-damaged and #define vaguely common
words like 'open'). And in almost all support cases I've seen, the
problem was solved by a hardware replacement, if I didn't solve it
myself.
Wil
--
Wil Cooley <wcooley at nakedape.cc>
Naked Ape Consulting, Ltd. <http://nakedape.cc>