
Title: Client/Server Design Alternatives

Author: apony    Time: 2007-07-09 15:49
Title: Client/Server Design Alternatives
Unix Network Programming, Volume 1, Third Edition: The Sockets Networking API
Chapter 30. Client/Server Design Alternatives

gives us this figure (Figure 30.1):

Row  Server description                                Process control CPU time
                                                       (difference from baseline)
 0   Iterative server (baseline)                        0.0
 1   Concurrent server, one fork per client request    20.90
 2   Pre-fork, each child calling accept                1.80
 3   Pre-fork, file locking around accept               2.07
 4   Pre-fork, thread mutex locking around accept       1.75
 5   Pre-fork, parent passing descriptor to child       2.58
 6   One thread per client request                      0.99
 7   Pre-threaded, mutex locking to protect accept      1.93
 8   Pre-threaded, main thread calling accept           2.05

But Section 30.11 says:
Comparing rows 6 and 7 in Figure 30.1, we see that this latest version of our server is faster than the create-one-thread-per-client version.

So I think there must be something wrong with the data in row 7.
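
For comparison, row 6 (one thread per client request) differs in that the main thread alone blocks in accept() and creates a new detached thread for every accepted connection. Again only a minimal sketch, with the same placeholder port, echo loop, and omitted error checking as above:

/*
 * Minimal sketch of the row 6 design: the main thread alone blocks in
 * accept() and spawns one detached thread per accepted connection.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void *serve_client(void *arg)
{
    int     connfd = (int) (long) arg;      /* simplification: fd passed in the pointer */
    char    buf[4096];
    ssize_t n;

    pthread_detach(pthread_self());
    while ((n = read(connfd, buf, sizeof(buf))) > 0)        /* echo back to the client */
        write(connfd, buf, n);
    close(connfd);
    return NULL;
}

int main(void)
{
    int listenfd, connfd;
    struct sockaddr_in servaddr;
    pthread_t tid;

    listenfd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&servaddr, 0, sizeof(servaddr));
    servaddr.sin_family      = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port        = htons(9877);
    bind(listenfd, (struct sockaddr *) &servaddr, sizeof(servaddr));
    listen(listenfd, 1024);

    for (;;) {
        connfd = accept(listenfd, NULL, NULL);      /* one new thread per client */
        if (connfd >= 0)
            pthread_create(&tid, NULL, serve_client, (void *) (long) connfd);
    }
}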

I found on the Internet that

Dr. Ayman A. Abdel-Hamid
Computer Science Department
Virginia Tech

also considers the "1.93" value doubtful in his lecture.

I also tested the eight examples with the same parameters and got the following figure (a sketch of one possible way to measure the CPU time follows the table):
Row  Speed rank (1 = fastest)  Cost time (seconds)
 1             8                   3.215505
 2             3                   0.596908
 3             6                   0.897862
 4             1                   0.565911
 5             7                   0.938856
 6             5                   0.730888
 7             2                   0.571912
 8             4                   0.628903
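
If I remember correctly, the book's test servers report their total CPU time (user plus system, including terminated children) from a SIGINT handler using getrusage(). The helper below is only a hypothetical stand-in for that kind of measurement, with function names of my own; it is not necessarily how the numbers above were produced.

/*
 * Hypothetical measurement helper: print the server's total CPU time --
 * user plus system, for the process itself and any terminated children --
 * when the server is interrupted with Ctrl-C.
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <unistd.h>

static double tv_secs(struct timeval tv)
{
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

static void print_cpu_time(int signo)
{
    struct rusage self, children;
    double total;

    (void) signo;
    getrusage(RUSAGE_SELF, &self);
    getrusage(RUSAGE_CHILDREN, &children);
    total = tv_secs(self.ru_utime) + tv_secs(self.ru_stime)
          + tv_secs(children.ru_utime) + tv_secs(children.ru_stime);
    printf("total CPU time: %.6f seconds\n", total);    /* demo only: printf is not async-signal-safe */
    exit(0);
}

int main(void)
{
    signal(SIGINT, print_cpu_time);

    /* a real server would run its accept loop here;
     * this stub just waits for Ctrl-C and then reports */
    for (;;)
        pause();
}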

Based on these results, I think the value in row 6 may actually be 1.99 rather than 0.99; that would make the figure agree with the statement in Section 30.11.
According to the figure in the book, "Pre-fork, each child calling accept" (row 2) should be faster than "Pre-threaded, mutex locking to protect accept" (row 7), but my test shows the opposite ordering; a sketch of the row 2 design follows.
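
Here is the row 2 design for reference as well: a fixed number of children are forked ahead of time and all of them block in accept() on the same listening socket, relying on the kernel to hand each incoming connection to one of them. Same placeholder port, child count, echo loop, and omitted error checking as the earlier sketches.

/*
 * Minimal sketch of the row 2 design: NCHILDREN processes are forked up
 * front and all of them block in accept() on the shared listening socket.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NCHILDREN 8

static void child_main(int listenfd)
{
    int     connfd;
    char    buf[4096];
    ssize_t n;

    for (;;) {
        connfd = accept(listenfd, NULL, NULL);      /* every child blocks here */
        if (connfd < 0)
            continue;
        while ((n = read(connfd, buf, sizeof(buf))) > 0)    /* echo back to the client */
            write(connfd, buf, n);
        close(connfd);
    }
}

int main(void)
{
    int listenfd, i;
    struct sockaddr_in servaddr;

    listenfd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&servaddr, 0, sizeof(servaddr));
    servaddr.sin_family      = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port        = htons(9877);
    bind(listenfd, (struct sockaddr *) &servaddr, sizeof(servaddr));
    listen(listenfd, 1024);

    for (i = 0; i < NCHILDREN; i++)
        if (fork() == 0)
            child_main(listenfd);       /* child never returns */

    for (;;)
        pause();                        /* parent just waits */
}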

What is your opinion?
Author: apony    Time: 2007-07-12 11:34
Was my description not clear enough?
Or is nobody interested in this topic?



