TCP Tuning Guide - Linux TCP Tuning
Version 1.0 - Last published Feb 13, 2006
Linux TCP Tuning

There are a lot of differences between Linux versions 2.4 and 2.6, so first we'll cover the tuning issues that are the same in both 2.4 and 2.6. To change TCP settings, add the entries below to the file /etc/sysctl.conf, and then run "sysctl -p".

As with all operating systems, the default maximum Linux TCP buffer sizes are way too small. I suggest changing them to the following settings:

  # increase TCP max buffer size
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  # increase Linux autotuning TCP buffer limits
  # min, default, and max number of bytes to use
  net.ipv4.tcp_rmem = 4096 87380 16777216
  net.ipv4.tcp_wmem = 4096 65536 16777216

Note: you should leave tcp_mem alone. The defaults are fine.
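
Once these entries are in /etc/sysctl.conf, you can load them and read the values back to confirm they took effect; a quick check using the same sysctl names listed above:

     # load the settings from /etc/sysctl.conf
     sysctl -p
     # read one of the new limits back to verify
     sysctl net.core.rmem_max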

Another thing you can try that may help increase TCP throughput is to increase the size of the interface queue. To do this, do the following:

     ifconfig eth0 txqueuelen 1000

I've seen increases in bandwidth of up to 8x by doing this on some long, fast paths. This is only a good idea for Gigabit Ethernet connected hosts, and may have other side effects such as uneven sharing between multiple streams.
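
If your system has the iproute2 tools installed, the same transmit queue length can also be set with the ip command instead of ifconfig; an equivalent sketch, assuming eth0 is the interface on the fast path:

     # same effect as the ifconfig command above, using iproute2
     ip link set dev eth0 txqueuelen 1000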


Linux 2.4

Starting with Linux 2.4, Linux has implemented a sender-side autotuning mechanism, so setting the optimal buffer size on the sender is not needed. This assumes you have set large buffers on the receive side, as the sending buffer will not grow beyond the size of the receive buffer.

However, Linux 2.4 has some other strange behavior that one needs to be aware of. For example: the value of ssthresh for a given path is cached in the routing table. This means that if a connection has a retransmission and reduces its window, then all connections to that host for the next 10 minutes will use a reduced window size, and will not even try to increase their windows. The only way to disable this behavior is to do the following before all new connections (you must be root):

       sysctl -w net.ipv4.route.flush=1

More information on various tuning parameters for Linux 2.4 is available in the Ipsysctl tutorial.


Linux 2.6

Starting in Linux 2.6.7 (and back-ported to 2.4.27), BIC TCP is part of the kernel, and enabled by default. BIC TCP helps recover quickly from packet loss on high-speed WANs, and appears to work quite well. A BIC implementation bug was discovered, but this was fixed in Linux 2.6.11, so you should upgrade to this version or higher.

Linux 2.6 also includes both sender- and receiver-side automatic buffer tuning (up to the maximum sizes specified above). There is also a setting to fix the ssthresh caching weirdness described above.

There are a couple additional sysctl settings for 2.6:

   # don't cache ssthresh from previous connection
   net.ipv4.tcp_no_metrics_save = 1
   # recommended to increase this for 1000 BT or higher
   net.core.netdev_max_backlog = 2500
   # for 10 GigE, use this
   # net.core.netdev_max_backlog = 30000

Starting with version 2.6.13, Linux supports pluggable congestion control algorithms. The congestion control algorithm used is set using the sysctl variable net.ipv4.tcp_congestion_control, which is set to reno by default. (Apparently they decided that BIC was not quite ready for prime time.) The current set of congestion control options is:

  • reno: Traditional TCP used by almost all other OSes. (default)
  • bic: BIC-TCP
  • highspeed: HighSpeed TCP: Sally Floyd's suggested algorithm
  • htcp: Hamilton TCP
  • hybla: For satellite links
  • scalable: Scalable TCP
  • vegas: TCP Vegas
  • westwood: optimized for lossy networks

For very long fast paths, I suggest trying HTCP or BIC-TCP if Reno is not performing as desired. To set this, do the following:

 	sysctl -w net.ipv4.tcp_congestion_control=htcp
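
On more recent 2.6 kernels you can also ask the kernel which congestion control modules it was built with, and make your choice persistent by adding it to /etc/sysctl.conf. A small sketch along these lines (the tcp_available_congestion_control sysctl may not be present on the earliest 2.6.13-era kernels):

 	# list the congestion control algorithms this kernel knows about
 	sysctl net.ipv4.tcp_available_congestion_control
 	# make the choice persistent across reboots
 	echo "net.ipv4.tcp_congestion_control = htcp" >> /etc/sysctl.conf
 	sysctl -p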

More information on each of these algorithms and some results can be found here.

Note: Linux 2.6.11 and under has a serious problem with certain Gigabit and 10 Gig ethernet drivers and NICs that support "tcp segmentation offload", such as the Intel e1000 and ixgb drivers, the Broadcom tg3, and the s2io 10 GigE drivers. This problem was fixed in version 2.6.12. A workaround for this problem is to use ethtool to disable segmentation offload:

     ethtool -K eth0 tso off
This will reduce your overall performance, but will make TCP over LFNs far more stable.
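
To check whether segmentation offload is currently enabled on an interface (before and after applying the workaround), ethtool can report the offload settings; a quick check, assuming eth0 is the affected NIC:

     # show the current offload settings, including TCP segmentation offload
     ethtool -k eth0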

More information on tuning parameters and defaults for Linux 2.6 is available in the file ip-sysctl.txt, which is part of the 2.6 source distribution.

And finally, a warning for both 2.4 and 2.6: for very large BDP paths where the TCP window is > 20 MB, you are likely to hit the Linux SACK implementation problem. If Linux has too many packets in flight when it gets a SACK event, it takes too long to locate the SACKed packet, and you get a TCP timeout and CWND goes back to 1 packet. Restricting the TCP buffer size to about 12 MB seems to avoid this problem, but clearly limits your total throughput. Another solution is to disable SACK.
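
For example, both workarounds can be expressed as sysctl.conf entries; the values below are illustrative only (12582912 bytes is roughly 12 MB), and disabling SACK trades away its normal loss-recovery benefit for all connections:

   # cap the autotuning maximums at ~12 MB to sidestep the SACK problem
   net.ipv4.tcp_rmem = 4096 87380 12582912
   net.ipv4.tcp_wmem = 4096 65536 12582912
   # or, alternatively, disable SACK entirely
   # net.ipv4.tcp_sack = 0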


Linux 2.2

If you are still running Linux 2.2, upgrade! If this is not possible, add the following to /etc/rc.d/rc.local

   echo 8388608 > /proc/sys/net/core/wmem_max
   echo 8388608 > /proc/sys/net/core/rmem_max
   echo 65536 > /proc/sys/net/core/rmem_default
   echo 65536 > /proc/sys/net/core/wmem_default