[PLUG] Cache usage and integer computation speed
Keith Lofstrom
keithl at kl-ic.com
Tue Nov 22 19:25:53 UTC 2005
This is more a computer science question than a Linux question, but
there are probably folks in PLUG that can answer it.
I am working on an integer arithmetic problem that can be accelerated
with table lookups; for example, a randomly accessible 256 Kbyte table
accelerates the computation by about 25 times. Much larger tables make
the calculation somewhat faster, roughly proportional to the logarithm
of the table size. I can make the table larger or smaller, impacting
compute time somewhat, but (I assume) impacting swap time and cache
misses significantly. Overall program size is small, I/O is very
small, and the calculation is quite amenable to parallelization.
It is not code-breaking, but it is that kind of intense arithmetic.
The calculation will run for a very long time on a lightly loaded
Linux machine with a P4 and a 512K cache. I will probably use GCC
and a recent 2.6.X kernel, though I am told the Intel compiler might
give better results (debatable; a separate question).
So the question: what fraction of the cache should I target my
application and table to fit in? That is, how big can a chunk
of code and data get before performance starts to degrade?
Keith
--
Keith Lofstrom keithl at keithl.com Voice (503)-520-1993
KLIC --- Keith Lofstrom Integrated Circuits --- "Your Ideas in Silicon"
Design Contracting in Bipolar and CMOS - Analog, Digital, and Scan ICs