[PLUG] Efficiency of Compiled vs. Interpreted Languages

glen e. p. ropella gepr at tempusdictum.com
Fri Feb 5 18:36:42 UTC 2010


Thus spake S. Michael Convey circa 10-02-05 09:28 AM:
> I have found this debate fascinating. However, I am not a developer - but
> aspire to move in that direction. I forwarded some of this debate to a
> couple of developer friends of mine and here are their thoughts:

Just be aware that the rhetoric is based on a false dichotomy between
"interpreted" and "compiled".  All languages, including C, are a little
bit interpreted, and all languages, including PHP, are a little bit
compiled.  The only difference is where one draws the line between human
and computer.  I.e. the real spectrum behind the false dichotomy is the
extent to which there's an isomorphism between the source code and the
machine instructions.

An interpreted language simply hooks compiled units together with
human-readable glue, which is itself also compiled.  A compiled language
doesn't leave any human-readable sentences in the result.  But the
extent to which a language is compiled matters quite a bit.  C is
compiled all the way down to the machine level.  But even then, there
are gray areas like inlining vs. function calls and branch prediction.
Both function calls and branch prediction can be loosely considered
forms of "interpretation".
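
To make that concrete, here's a toy sketch in C -- a made-up stack
machine for illustration, not any real VM.  The switch cases are the
compiled units; the program array is the glue:

#include <stdio.h>

/* Toy opcodes for a made-up stack machine -- purely illustrative. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

/* The "glue": a human-authored program computing (2 + 3) * 4. */
static const int program[] = {
    OP_PUSH, 2,
    OP_PUSH, 3,
    OP_ADD,
    OP_PUSH, 4,
    OP_MUL,
    OP_PRINT,
    OP_HALT
};

int main(void)
{
    int stack[64];
    int sp = 0;               /* stack pointer */
    const int *ip = program;  /* instruction pointer */

    /* The dispatch loop: each case body is ordinary compiled C, but
     * choosing which body to run at runtime is the "interpretation". */
    for (;;) {
        switch (*ip++) {
        case OP_PUSH:  stack[sp++] = *ip++;              break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);    break;
        case OP_HALT:  return 0;
        }
    }
}

Every opcode handler runs at full compiled speed; the cost of
"interpretation" is just the dispatch in between.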

So, as Randall and Eric point out, doing more work takes more time.
Regardless of how you get there (interpreted or compiled), if it's
isomorphic to the machine code, your program will be fast.  If it's full
of sugar that makes it easier for humans to use and grok, then your
program will be slow.
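
A crude way to feel that difference is a toy microbenchmark (one I'm
making up here; absolute numbers will vary wildly with compiler and
flags).  Sum the same range directly, and then through a function
pointer the optimizer can't see through:

#include <stdio.h>
#include <time.h>

#define N 100000000LL

static long long add(long long acc, long long x) { return acc + x; }

int main(void)
{
    /* volatile pointer so the compiler can't inline the call away */
    long long (*volatile op)(long long, long long) = add;
    long long acc;
    clock_t t;

    /* Direct: the compiler turns this into a tight loop (or even
     * folds it to a closed form) -- nearly isomorphic to the machine. */
    acc = 0;
    t = clock();
    for (long long i = 0; i < N; i++)
        acc += i;
    printf("direct:   sum=%lld  %.2fs\n", acc,
           (double)(clock() - t) / CLOCKS_PER_SEC);

    /* Indirect: the same arithmetic, but every element pays for a
     * call -- a small, fixed dose of "interpretation" per step. */
    acc = 0;
    t = clock();
    for (long long i = 0; i < N; i++)
        acc = op(acc, i);
    printf("indirect: sum=%lld  %.2fs\n", acc,
           (double)(clock() - t) / CLOCKS_PER_SEC);

    return 0;
}

The two loops compute the same sum; the gap between them is pure
dispatch overhead, which is exactly what heavily sugared languages
trade speed away for.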

A more useful discussion would consist of use-cases that require the
dynamism of interpreted languages, use-cases that require the
determinism of compiled languages, and use-cases that require a healthy
mix of the two.

Relating this back to gnu/Linux, it seems that the unix philosophy of
building large libraries of small and focused utilities that are
composed to accomplish complex tasks tends more toward the interpretive
side of the (false) dichotomy.  Windows seems to have initially adopted
the monolithic attitude of hiding everything behind the curtain.  (We're
talking about "seeing into the machine" here.)  This is evident even in
things like window managers.  In Windows, you get one.  In gnu/linux you
get... well, lots of choices and the ability to build your own hybrid.
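
To put a little code behind the composition point: even from C, the
unix way is to borrow those small utilities rather than reimplement
them.  A throwaway sketch (the pipeline itself is arbitrary, just an
example):

#include <stdio.h>

int main(void)
{
    /* Compose three small, focused utilities to answer one question:
     * how many .conf files sit at the top of /etc? */
    FILE *p = popen("ls /etc | grep '\\.conf$' | wc -l", "r");
    char line[64];

    if (p == NULL) {
        perror("popen");
        return 1;
    }
    if (fgets(line, sizeof line, p) != NULL)
        printf("conf files in /etc: %s", line);
    pclose(p);
    return 0;
}

Each stage does one job and knows nothing about the others; the shell
is the human-readable glue.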

Apple seems to have graduated away from the monolithic approach with
their adoption of unix; but they maintain the idea that choice is a bad
(or at least confusing) thing and hide the power of unix behind a
translucent curtain.  I think they've done a GREAT service to their user
base, which consists of a smooth spectrum of types, from those who never
want to look behind the curtain all the way down to total dorks who must
see into the machine.  And distributions like ubuntu are beginning to
allow that full spectrum to use gnu/linux, as well.

Going back up to the topic of the internet (not the web): the
loosey-goosey, user-controllable collection of resources seems like a
natural fit for use-cases calling for dynamism and, especially,
resilience and graceful failure.  But with the ubiquity of
high-bandwidth pipes and the Windows monolithic/obfuscating/delegating
attitude, we have a bit of a conflict between the users' expectations
(e.g. "the internet is just another desktop application") and the
reality (e.g. "the internet is a loose and incommensurate collection of
programs listening on various ports").  Unfortunately, www users (a
subset of internet users) don't want to see behind the curtain at all.
They're just _begging_ obfuscators like microsoft to do all that
computery stuff for them.

The perennial question is: do the morlocks try to force the eloi to look
at and understand the machines or do they allow them to live out their
lives in blissful ignorance (until we get hungry)?

-- 
glen e. p. ropella, 971-222-9095, http://tempusdictum.com