

WIP: Improve multi-thread performance by reducing the duration the Queue message lock is held.

Review Request #4232 - Created March 7, 2012 and submitted

Kenneth Giusti
Bugs: qpid-3890
Reviewers: qpid (aconway, astitcher, gordon, tross)
Repository: qpid
In order to improve our parallelism across threads, this patch reduces the scope over which the Queue's message lock is held.

I've still got more testing to do, but the simple test cases below show pretty good performance improvements on my multi-core box.  I'd like to get some early feedback, as there are a lot of little changes to the queue locking that will need vetting.
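
To make the intent concrete, here's a minimal standalone sketch (not the patch itself, and not qpid's actual Queue class) of the general technique: take the message lock only while the shared message list is being touched, and do consumer notification and per-message post-processing outside the critical section.

#include <deque>
#include <mutex>
#include <condition_variable>
#include <string>

class NarrowLockQueue {
public:
    void push(const std::string& msg) {
        {
            std::lock_guard<std::mutex> guard(messageLock);  // held only while touching 'messages'
            messages.push_back(msg);
        }
        available.notify_one();  // waking a consumer does not need the lock
    }

    std::string pop() {
        std::string msg;
        {
            std::unique_lock<std::mutex> guard(messageLock);
            while (messages.empty())
                available.wait(guard);
            msg = messages.front();
            messages.pop_front();
        }
        // Per-message work (acks, statistics, listener callbacks) runs unlocked,
        // so producers are not serialized behind it.
        return msg;
    }

private:
    std::mutex messageLock;             // guards 'messages' only
    std::condition_variable available;
    std::deque<std::string> messages;
};

The patch applies the same idea inside the broker's Queue implementation; the risk, and the reason the locking changes need vetting, is that anything that previously relied on the wider critical section now needs its own protection.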

As far as performance is concerned, I've run the following tests on an 8-core Xeon X5550 2.67GHz box. I used qpid-send and qpid-receive to generate traffic, with all traffic generated in parallel.
All numbers are in messages/second, sorted by best overall throughput.

Test 1: Funnel Test (shared queue, 2 producers, 1 consumer)

Test setup and execution:
qpidd --auth no -p 8888 --no-data-dir --worker-threads=3
qpid-config -b $SRC_HOST add queue srcQ1 --max-queue-size=1200000000 --max-queue-count=4000000 --flow-stop-size=0 --flow-stop-count=0
qpid-receive -b $SRC_HOST -a srcQ1 -f -m 4000000 --capacity 2000 --ack-frequency 1000 --print-content no --report-total &
qpid-send -b $SRC_HOST -a srcQ1 -m 2000000 --content-size 300 --capacity 2000 --report-total --sequence no --timestamp no &
qpid-send -b $SRC_HOST -a srcQ1 -m 2000000 --content-size 300 --capacity 2000 --report-total --sequence no --timestamp no &

trunk:                        patched:
tp(m/s)                       tp(m/s)

S1    :: S2    :: R1          S1    :: S2    :: R1
73144 :: 72524 :: 112914      91120 :: 83278 :: 133730
70299 :: 68878 :: 110905      91079 :: 87418 :: 133649
70002 :: 69882 :: 110241      83957 :: 83600 :: 131617
70523 :: 68681 :: 108372      83911 :: 82845 :: 128513
68135 :: 68022 :: 107949      85999 :: 81863 :: 125575



Test 2: Quad Client Shared Test (1 queue, 2 senders x 2 receivers)

Test setup and execution:
qpidd --auth no -p 8888 --no-data-dir --worker-threads=4
qpid-config -b $SRC_HOST add queue srcQ1 --max-queue-size=1200000000 --max-queue-count=4000000 --flow-stop-size=0 --flow-stop-count=0
qpid-receive -b $SRC_HOST -a srcQ1 -f -m 2000000 --capacity 2000 --ack-frequency 1000 --print-content no --report-total &
qpid-receive -b $SRC_HOST -a srcQ1 -f -m 2000000 --capacity 2000 --ack-frequency 1000 --print-content no --report-total &
qpid-send -b $SRC_HOST -a srcQ1 -m 2000000 --content-size 300 --capacity 2000 --report-total --sequence no --timestamp no &
qpid-send -b $SRC_HOST -a srcQ1 -m 2000000 --content-size 300 --capacity 2000 --report-total --sequence no --timestamp no &

trunk:                               patched:
tp(m/s)                              tp(m/s)

S1    :: S2    :: R1    :: R2        S1    :: S2    :: R1    :: R2
52826 :: 52364 :: 52386 :: 52275     64826 :: 64780 :: 64811 :: 64638
51457 :: 51269 :: 51399 :: 51213     64630 :: 64157 :: 64562 :: 64157
50557 :: 50432 :: 50468 :: 50366     64023 :: 63832 :: 64092 :: 63748

Note: pre-patch, qpidd ran steadily at about 270% CPU during this test. With the patch, qpidd's CPU utilization was steady at 300%.



Test 3: Federation Funnel (two federated brokers, 2 producers upstream, 1 consumer downstream)

Test setup and execution:
qpidd --auth no -p 8888 --no-data-dir --worker-threads=3 -d
qpidd --auth no -p 9999 --no-data-dir --worker-threads=2 -d
qpid-config -b $SRC_HOST add queue srcQ1 --max-queue-size=600000000  --max-queue-count=2000000 --flow-stop-size=0 --flow-stop-count=0
qpid-config -b $SRC_HOST add queue srcQ2 --max-queue-size=600000000  --max-queue-count=2000000 --flow-stop-size=0 --flow-stop-count=0
qpid-config -b $DST_HOST add queue dstQ1 --max-queue-size=1200000000 --max-queue-count=4000000 --flow-stop-size=0 --flow-stop-count=0
qpid-config -b $DST_HOST add exchange fanout dstX1
qpid-config -b $DST_HOST bind dstX1 dstQ1
qpid-route --ack=1000 queue add $DST_HOST $SRC_HOST dstX1 srcQ1
qpid-route --ack=1000 queue add $DST_HOST $SRC_HOST dstX1 srcQ2
qpid-receive -b $DST_HOST -a dstQ1 -f -m 4000000 --capacity 2000 --ack-frequency 1000 --print-content no --report-total &
qpid-send -b $SRC_HOST -a srcQ1 -m 2000000 --content-size 300 --capacity 2000 --report-total --sequence no --timestamp no &
qpid-send -b $SRC_HOST -a srcQ2 -m 2000000 --content-size 300 --capacity 2000 --report-total --sequence no --timestamp no &


trunk:                        patched:
tp(m/s)                       tp(m/s)

S1    :: S2    :: R1          S1    :: S2    :: R1
86150 :: 84146 :: 87665       90877 :: 88534 :: 108418
89188 :: 88319 :: 82469       87014 :: 86753 :: 107386
88221 :: 85298 :: 82455       89790 :: 88573 :: 104119

Still TBD:
  - fix message group errors
  - verify/fix cluster

Testing done:
  - unit test (non-cluster)
  - performance tests as described above
Review request changed
Updated (March 19, 2012, 10:22 p.m.)
I've applied most of the review recommendations, reverting some of the changes that led to race conditions.  The result is a bit slower than the original, but shows promise, especially in the multi-consumer/multi-producer case.

Rerun of the tests given the latest changes (against latest trunk):

Test 1
trunk                              qpid-3890
tp(m/s)                            tp(m/s)
S1    :: S2    :: R1               S1    :: S2    :: R1
70907 :: 70776  :: 112450          81417 :: 75767  :: 115527
73190 :: 71805  :: 107750          81132 :: 75067  :: 114962
67628 :: 66340  :: 106461          75638 :: 73906  :: 110000
64801 :: 64055  :: 104864          77083 :: 75184  :: 106368
69654 :: 66441  :: 103480          72933 :: 72518  :: 105773



Test 2
trunk                                   qpid-3890
tp(m/s)                                 tp(m/s)
S1    :: S2    :: R1    :: R2           S1    :: S2    :: R1    :: R2
49921 :: 49761  :: 49724  :: 49674      60263 :: 60234 :: 60211 :: 60111
49672 :: 49384  :: 49372  :: 49279      59781 :: 59545 :: 59770 :: 59529
48138 :: 48107  :: 48097  :: 48056      59792 :: 59506 :: 59571 :: 59426
48228 :: 48216  :: 48061  :: 48009      59002 :: 58847 :: 58972 :: 58779




Test 3
trunk                              qpid-3890
tp(m/s)                            tp(m/s)
S1    :: S2    :: R1               S1    :: S2    :: R1
84372 :: 84320  :: 92689           81960 :: 80475 :: 96154
88671 :: 84633  :: 85234           83841 :: 82777 :: 86037
81302 :: 81073  :: 79898           83740 :: 81476 :: 85572


I'd like to put this on trunk now that we've branched - the earlier the better.

Opinions?
Ship it!
Posted (March 20, 2012, 2:06 p.m.)
I would still like to see a comment in Queue.h that states explicitly which member variables are covered by which locks - while it's fresh in your mind :)
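
For illustration only, the kind of lock-coverage annotation being asked for might look roughly like the sketch below; the member names are placeholders and are not taken from the real Queue.h.

#include <mutex>

// Hypothetical sketch of a lock-coverage comment (placeholder names only):
class Queue {
private:
    // messageLock protects: messages, consumerCount, listeners.
    // It must NOT be held while invoking consumer callbacks or updating
    // management statistics; those happen after the lock is released.
    mutable std::mutex messageLock;
    // (each remaining member would be annotated with the lock that guards
    //  it, or marked immutable after construction)
};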