Andre,
(This is an answer to both your posts)
Both points in your first post are well taken. The ideal concurrency
level is likely greater than 1 on a real-world setup where requests
vary widely in resulting CPU time.
However, it is also likely much lower than 20, which seems to be the
current level in the absence of any concurrency control. The ideal
level is probably best determined by experimentation.
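
To make the idea concrete, here is a minimal sketch of such a
concurrency cap using a counting semaphore (Python here just for
illustration; the limit of 4 is an arbitrary placeholder, and the
right value would come from the experimentation mentioned above):

    import threading
    import time

    MAX_CONCURRENT = 4  # placeholder value; tune by experiment
    slots = threading.BoundedSemaphore(MAX_CONCURRENT)

    def handle_request(request_id, cpu_seconds):
        # Block until one of the MAX_CONCURRENT slots is free, so at
        # most MAX_CONCURRENT requests are executing at any one time.
        with slots:
            time.sleep(cpu_seconds)  # stand-in for real page rendering
            print("request %d done" % request_id)

    # 20 requests arrive at once, but only 4 run concurrently.
    threads = [threading.Thread(target=handle_request, args=(i, 0.1))
               for i in range(20)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()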
Regarding your second posting, there are a number of things one can do
to fine-tune this approach, when and if fine-tuning is deemed necessary.
One could, for instance, reserve a certain number of locks for known
short-running requests (like page loads), ensuring these will always be
fast. It is less important to guarantee the speed of long-running page
executions, like full-text searches, since the user expects those to
take more time.
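
A rough sketch of that reserved-lock idea (the pool sizes and the
short/long classification are made up for the example):

    import threading

    # Hypothetical pool sizes; in practice these would be tuned.
    SHORT_RESERVED = 2  # slots only short requests may use
    SHARED = 4          # slots open to any request

    short_pool = threading.BoundedSemaphore(SHORT_RESERVED)
    shared_pool = threading.BoundedSemaphore(SHARED)

    def run_limited(work, is_short):
        # Short requests (page loads) try the reserved pool first, so
        # a burst of long requests (full-text searches) can never
        # occupy every slot; long requests use only the shared pool.
        if is_short and short_pool.acquire(blocking=False):
            pool = short_pool
        else:
            shared_pool.acquire()
            pool = shared_pool
        try:
            work()
        finally:
            pool.release()

    run_limited(lambda: print("page load"), is_short=True)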
/E23
Andre Engels <engels-at-uni-koblenz.de> wrote:
This goes wrong on two counts:
1. If there is any amount of true parallelism going on, the time for n
concurrent connections will be less than n*t seconds. If, during
execution, time is spent waiting for an answer from another machine,
the amount of parallelism may well be large.
2. Processes may differ in necessary execution time. Running them
sequentially increases the amount of time that 'fast' processes have
to wait for 'slow' processes. For example, running a 1 millisecond and
a 10 millisecond process in parallel (with the CPU split evenly
between them) gives 2 milliseconds for the fast, and 11 milliseconds
for the slow process. If run in sequence (assuming each process goes
first 50% of the time), this gives on average 6 milliseconds for the
fast, and 10.5 milliseconds for the slow process.
Andre Engels
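
For what it's worth, the 1 ms / 10 ms arithmetic above can be checked
with a throwaway round-robin simulation (units are milliseconds; the
0.01 ms quantum is arbitrary):

    def parallel_completion(jobs, quantum=0.01):
        # Round-robin one CPU among all unfinished jobs; returns the
        # clock time at which each job completes.
        remaining = dict(jobs)
        clock, finished = 0.0, {}
        while remaining:
            for name in list(remaining):
                step = min(quantum, remaining[name])
                clock += step
                remaining[name] -= step
                if remaining[name] <= 1e-9:
                    del remaining[name]
                    finished[name] = clock
        return finished

    jobs = {"fast": 1.0, "slow": 10.0}  # ms of CPU work each
    print(parallel_completion(jobs))    # fast ~2 ms, slow 11 ms

    # Sequential, averaged over both possible orders:
    fast, slow = jobs["fast"], jobs["slow"]
    print((fast + (slow + fast)) / 2)   # fast: 6.0 ms on average
    print(((fast + slow) + slow) / 2)   # slow: 10.5 ms on average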