Is nginx really that good? Help: 32 GB / 16-core nginx reverse proxy in front of IIS falls over at 2,000 connections

#1 | Posted on 2013-12-17 22:33
Last edited by 輕逐微風 on 2013-12-17 22:46

Hardware: 32 GB RAM, 16-thread CPU
OS: windows x7, one nginx box

Is nginx really as good as everyone says?

100 threads, a 10-second test; connection-state counts sampled on the proxy during the run:
  TIME_WAIT       656
  FIN_WAIT1       48
  FIN_WAIT2       35
  ESTABLISHED     271
  SYN_RECV        3
  CLOSING         6
  LAST_ACK        25
  LISTEN          19
  TIME_WAIT       1585
  FIN_WAIT1       154
  SYN_SENT        22
  FIN_WAIT2       8
  ESTABLISHED     1513
  SYN_RECV        2
  CLOSING         2
  LAST_ACK        11
  LISTEN          13
Testing PHP-FPM on the local machine: 500 PHP-FPM servers with 1,000 child threads, using webbench to simulate 1,000 users. After about 5 seconds the nginx error log starts filling with:
  2013/12/17 22:08:23 [error] 2205#0: *14999 connect() failed (110: Connection timed out) while connecting to upstream, client: upstream: "fastcgi://127.0.0.1:9999",
  2013/12/17 22:08:23 [error] 2205#0: *14999 connect() failed (110: Connection timed out) while connecting to upstream, client: upstream: "fastcgi://127.0.0.1:9999",
  (the same error line repeats continuously)
Testing against the backend IIS server, again simulating 1,000 users; after about 5 seconds nginx reports the following errors:
  2013/12/17 22:18:02 [error] 4908#0: *79472 connect() failed (110: Connection timed out) while connecting to upstream, client: 124.165.229.214, request: "GET /Coder/GetRate.aspx?t=d43d1271-2c5a-47b2-a3bf-512e5db8c0db HTTP/1.1", upstream: "http://<IP>:9595/Coder/GetRate.aspx?t=d43d1271-2c5a-47b2-a3bf-512e5db8c0db",
  2013/12/17 22:18:15 [error] 4908#0: *79472 connect() failed (110: Connection timed out) while connecting to upstream, client: 124.165.229.214, request: "GET /Coder/GetRate.aspx?t=d43d1271-2c5a-47b2-a3bf-512e5db8c0db HTTP/1.1", upstream: "http://<IP>:9595/Coder/GetRate.aspx?t=d43d1271-2c5a-47b2-a3bf-512e5db8c0db",
  2013/12/17 22:18:27 [error] 4908#0: *79472 connect() failed (110: Connection timed out) while connecting to upstream, client: 124.165.229.214, request: "GET /Coder/GetRate.aspx?t=d43d1271-2c5a-47b2-a3bf-512e5db8c0db HTTP/1.1", upstream: "http://<IP>:9595/Coder/GetRate.aspx?t=d43d1271-2c5a-47b2-a3bf-512e5db8c0db",
The system log reports large numbers of time-wait errors:
  TCP: time wait bucket table overflow
  TCP: time wait bucket table overflow
  TCP: time wait bucket table overflow
  __ratelimit: 774 callbacks suppressed
  TCP: time wait bucket table overflow
  (the message repeats throughout the log)
/etc/sysctl.conf
  net.ipv4.tcp_max_syn_backlog = 65536
  net.core.netdev_max_backlog = 262144
  net.core.somaxconn = 32768

  net.core.wmem_default = 8388608
  net.core.rmem_default = 8388608
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.ipv4.tcp_timestamps = 0

  net.ipv4.tcp_synack_retries = 2
  net.ipv4.tcp_syn_retries = 2

  net.ipv4.tcp_tw_recycle = 1
  #net.ipv4.tcp_tw_len = 1
  net.ipv4.tcp_tw_reuse = 1

  net.ipv4.tcp_mem = 94500000 915000000 927000000
  net.ipv4.tcp_max_orphans = 3276800

  net.ipv4.tcp_fin_timeout = 30
  net.ipv4.tcp_keepalive_time = 1200
  net.ipv4.ip_local_port_range = 1024 65535

  #net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 120
  #net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 60
  #net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 120
  net.ipv4.neigh.default.gc_thresh1 = 10240
  net.ipv4.neigh.default.gc_thresh2 = 40960
  net.ipv4.neigh.default.gc_thresh3 = 81920
  net.ipv4.tcp_max_tw_buckets = 10000
  fs.file-max = 65535
  kernel.pid_max = 65536

  net.ipv4.tcp_rmem = 4096 4096 16777216
  net.ipv4.tcp_wmem = 4096 4096 16777216
  net.ipv4.conf.all.send_redirects = 0
  net.ipv4.conf.default.send_redirects = 0
  net.ipv4.conf.eth0.send_redirects = 0
  net.ipv4.conf.all.send_redirects = 0

  net.nf_conntrack_max = 6553600

  net.netfilter.nf_conntrack_tcp_timeout_established = 1200
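For context: the kernel prints "TCP: time wait bucket table overflow" once the number of sockets in TIME_WAIT exceeds net.ipv4.tcp_max_tw_buckets, and from that point it destroys TIME_WAIT entries immediately. The file above caps this at 10000, which a proxy that opens a fresh upstream connection per request will exceed quickly. Note also that both tcp_tw_reuse and tcp_tw_recycle rely on TCP timestamps, so with net.ipv4.tcp_timestamps = 0 neither has any effect. A minimal sketch of an adjustment (values are illustrative examples, not a tuned recommendation for this machine):

  # illustrative values only, not a recommendation tuned for this box
  # raise the TIME_WAIT ceiling so short-lived proxy connections are not dropped early
  net.ipv4.tcp_max_tw_buckets = 180000
  # tcp_tw_reuse only takes effect when TCP timestamps are enabled
  net.ipv4.tcp_timestamps = 1
  net.ipv4.tcp_tw_reuse = 1
  # the current fs.file-max (65535) is below the worker_rlimit_nofile 204800 used
  # later in the thread; a system-wide limit this low can also cap concurrency
  fs.file-max = 1000000

Apply with sysctl -p. Whether this helps still depends on how many connections per second the proxy opens towards the backend.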

#2 | Posted on 2013-12-18 14:30
The OS is windows x7?

Mate, run it on Linux first, then we can talk.

#3 | Posted on 2013-12-18 19:56
On Linux, nginx does not use a thread-per-connection model. With epoll and your hardware, 20,000 connections through the reverse proxy should not be a problem; by then the bottleneck is the backend. If the backend can't keep up, nginx will still report connect failures, because when the upstream is too busy nginx cycles through the upstream servers, finds none available, and returns an error.
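As a rough sketch of the directives this refers to (the directive names are standard nginx; the upstream name, address, and numbers are placeholders taken from the port shown in the error log, not a verified configuration):

  # sketch of the failure-handling knobs described above; names and numbers are placeholders
  upstream iis_backend {
      # mark the backend unavailable for 10s after 3 failed connection attempts
      server 192.168.10.173:9595 max_fails=3 fail_timeout=10s;
  }

  server {
      listen 80;
      location / {
          proxy_pass http://iis_backend;
          # fail fast instead of waiting the 60s default when the backend is saturated
          proxy_connect_timeout 3s;
          # conditions under which nginx gives up on this upstream and reports an error
          proxy_next_upstream error timeout;
      }
  }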

#4 | Posted on 2013-12-19 21:36
Reply to #2 (miss998)


Front end: CentOS 6 x64
Hardware: 16-core CPU, 32 GB RAM

Back end: IIS 6

#5 | Posted on 2013-12-19 21:37
Last edited by 輕逐微風 on 2013-12-19 21:39

Reply to #4 (輕逐微風)


One thing I should mention: the single backend IIS box handles the login and upload traffic on its own without any problem, at around 6K concurrent connections. As soon as the nginx front end is put in front of it, things actually get worse.

user  nginx;
worker_processes  8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
#worker_cpu_affinity auto;
worker_rlimit_nofile 204800;

error_log   /var/log/nginx/error.log;
#error_log  /var/log/nginx/error.log  notice;
#error_log  /var/log/nginx/error.log  info;
pid        /var/run/nginx.pid;


events {
    worker_connections  204800;
    use epoll;
}


There are still a lot of timeouts:

Active connections: 1922
server accepts handled requests
3621600 3621600 21329177
Reading: 13 Writing: 398 Waiting: 1511
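Given the TIME_WAIT flood on the proxy, one change commonly suggested for this pattern (a sketch only, assuming nginx 1.1.4 or newer and a backend that accepts HTTP keep-alive; the upstream name is a placeholder) is to reuse connections to the backend instead of opening a new one per request:

  # sketch: keep a pool of idle connections open to the backend
  upstream iis_backend {
      server 192.168.10.173:9595;
      keepalive 64;
  }

  server {
      listen 80;
      location / {
          proxy_pass http://iis_backend;
          # upstream keepalive requires HTTP/1.1 and a cleared Connection header
          proxy_http_version 1.1;
          proxy_set_header Connection "";
      }
  }

Fewer short-lived proxy-to-backend connections means fewer sockets parked in TIME_WAIT on the nginx box.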

#6 | Posted on 2013-12-19 21:43
Reply to #4 (輕逐微風)


Connection counts per client IP on the front end:

     10 110.189.94.99
     10 113.31.27.203
     10 113.31.27.217
     10 124.225.62.42
     10 180.184.30.199
     10 49.115.131.146
     11 113.57.246.143
     11 222.222.226.141
     11 61.132.226.157
     12 110.75.152.1
     15 222.82.202.8
     20 183.14.28.20
     27 192.168.10.168
     28 192.168.10.166
   1569 192.168.10.173   (the login server)

#7 | Posted on 2013-12-19 21:47
[root@proxyA nginx]# strace -p 20730
Process 20730 attached - interrupt to quit
epoll_wait(8, {}, 1, 915)               = 0
getsockopt(7, SOL_TCP, TCP_INFO, "\n\0\0\0\0\0\0\0@B\17\0\0\0\0\0\30\2\0\0\0\0\0\0\0\0\0\0\200\0\0\0"..., [104]) = 0
epoll_wait(8, {}, 1, 1000)              = 0
getsockopt(7, SOL_TCP, TCP_INFO, "\n\0\0\0\0\0\0\0@B\17\0\0\0\0\0\30\2\0\0\0\0\0\0\0\0\0\0\200\0\0\0"..., [104]) = 0
epoll_wait(8, {}, 1, 1000)              = 0
getsockopt(7, SOL_TCP, TCP_INFO, "\n\0\0\0\0\0\0\0@B\17\0\0\0\0\0\30\2\0\0\0\0\0\0\0\0\0\0\200\0\0\0"..., [104]) = 0
epoll_wait(8, {}, 1, 1000)              = 0
getsockopt(7, SOL_TCP, TCP_INFO, "\n\0\0\0\0\0\0\0@B\17\0\0\0\0\0\30\2\0\0\0\0\0\0\0\0\0\0\200\0\0\0"..., [104]) = 0
epoll_wait(8, {}, 1, 1000)              = 0
getsockopt(7, SOL_TCP, TCP_INFO, "\n\0\0\0\0\0\0\0@B\17\0\0\0\0\0\30\2\0\0\0\0\0\0\0\0\0\0\200\0\0\0"..., [104]) = 0
epoll_wait(8, {}, 1, 1000)              = 0
getsockopt(7, SOL_TCP, TCP_INFO, "\n\0\0\0\0\0\0\0@B\17\0\0\0\0\0\30\2\0\0\0\0\0\0\0\0\0\0\200\0\0\0"..., [104]) = 0
epoll_wait(8, {}, 1, 1000)              = 0
getsockopt(7, SOL_TCP, TCP_INFO, "\n\0\0\0\0\0\0\0@B\17\0\0\0\0\0\30\2\0\0\0\0\0\0\0\0\0\0\200\0\0\0"..., [104]) = 0
epoll_wait(8, {}, 1, 1000)              = 0

#9 | Posted on 2013-12-19 21:52
Last edited by 輕逐微風 on 2013-12-19 21:53

socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 48
ioctl(48, FIONBIO, [1])                 = 0
epoll_ctl(8, EPOLL_CTL_ADD, 48, {EPOLLIN|EPOLLOUT|EPOLLET, {u32=322895504, u64=140080681254544}}) = 0
connect(48, {sa_family=AF_INET, sin_port=htons(9595), sin_addr=inet_addr("192.168.10.173")}, 16) = -1 EINPROGRESS (Operation now in progress)
recvfrom(254, 0xa820a0, 8192, 0, 0, 0)  = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(22, 0xa820a0, 8192, 0, 0, 0)   = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(156, 0xa820a0, 8192, 0, 0, 0)  = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(364, 0xa820a0, 8192, 0, 0, 0)  = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(477, 0xa820a0, 8192, 0, 0, 0)  = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(75, 0xa820a0, 8192, 0, 0, 0)   = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(317, 0xa820a0, 8192, 0, 0, 0)  = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(218, 0xa820a0, 8192, 0, 0, 0)  = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(3, 0xa820a0, 8192, 0, 0, 0)    = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(92, 0xa820a0, 8192, 0, 0, 0)   = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(462, 0xa820a0, 8192, 0, 0, 0)  = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(136, 0xa820a0, 8192, 0, 0, 0)  = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(248, 0xa820a0, 8192, 0, 0, 0)  = -1 EAGAIN (Resource temporarily unavailable)

#10 | Posted on 2014-03-13 17:10
Re: "The system log reports large numbers of time-wait errors"

I recently ran nginx load tests against a static hello-world page on a local VMware VM and hit a similar situation. Following the fixes suggested online, I edited /etc/sysctl.conf and increased net.ipv4.tcp_max_tw_buckets,

but it does not seem to help. Looking for an answer to this as well.
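For what it's worth, it helps to confirm the new value actually took effect and to watch the TIME_WAIT count while the test runs (standard sysctl/ss commands; nothing here is specific to this setup):

  # reload /etc/sysctl.conf and print the running value
  sysctl -p
  sysctl net.ipv4.tcp_max_tw_buckets
  # rough count of sockets currently in TIME_WAIT (ss prints the state as TIME-WAIT)
  ss -tan | grep -c TIME-WAIT

Each TIME_WAIT entry lives for about 60 seconds on Linux, so if the benchmark opens connections faster than they expire, raising the bucket count only silences the message; reducing the number of short-lived connections (for example with keepalive between nginx and the backend, as sketched earlier in the thread) addresses the cause.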