Normally, when we visit an HTTPS site, Chrome tells us the connection uses TLS 1.2 (Google has apparently deployed TLS 1.3 on some of its own services, even though TLS 1.3 is still a draft), with such-and-such algorithms for encryption and authentication and such-and-such key exchange mechanism. We all know HTTP is an application-layer protocol on top of TCP, and TLS sits, so to speak, wedged between the application layer and the transport layer. To borrow someone else's metaphor: plain HTTP is a plastic pipe that leaks at the first poke (tampering, hijacking and so on), while TLS is a metal casing around it; once wrapped, it's much harder to spring a leak.

QUIC is even more interesting: it leaves TCP out of the picture entirely and, unconventionally, picks UDP as its underlying protocol. The QUIC protocol has a TLS stack built in and implements its own encrypted transport layer.
QUIC support
Web servers that support QUIC
There aren't many servers that support QUIC yet. The Chromium source tree contains a test server, there are a few projects on GitHub extracted from Chromium, plus go quic; all of them are roughly pre-alpha. After some searching, though, I found a Go-based web server called caddy that offers experimental QUIC support. So what's caddy about? From a look at the docs on its official site, my impression is that caddy aims to make running a website simpler and easier; stealing Nginx's and Apache's lunch is not its ambition. Its configuration file is concise, it has essentially no dependencies, and it's pretty much grab-and-go. It's also one of the few open-source web servers that support QUIC.

The most eye-catching part is that it handles HTTPS by itself: it can obtain and renew certificates. Give it an email address and it will go fetch a certificate from Let's Encrypt for you, and the result even scores an A on SSL Labs. Magic! It can apparently also publish a blog straight from git push, though I didn't look into that.
Browsers that support QUIC
Chrome and Chromium are both fine: as long as your version is reasonably recent, QUIC is supported and enabled by default. Open chrome://net-internals/#quic and the top of the page tells you whether QUIC support is enabled. If it isn't, turn it on in chrome://flags.
With the client sorted, what remains is server-side support. How do we enable that? Download caddy from the official site and start it, basically. That said, getting it to work has quite a few prerequisites, so let's go over them.

First, you need a domain and a server. If you run WordPress or similar, have PHP and MySQL (or MariaDB) ready too, and ideally sort out your certificate as well (this step isn't strictly required, since Caddy can request a certificate for you, but I'll assume you have it). My environment is Ubuntu 16.04 64-bit; I can't vouch for others.
Right, enough rambling for today; let's officially start stepping on the landmines...

What landmines, you ask? For one, the binary downloaded from the caddy site exits with an error if you enable QUIC. For another, you need Go 1.8; the package manager's Go 1.6 won't do!

If you're too lazy to build it yourself and you trust me, grab my prebuilt binary instead!
Installing Go
Go is a language created at Google (Ken Thompson, of Unix fame, is among its designers) and is said to be quite good. Unfortunately, the Go in Ubuntu 16.04's repositories is 1.6, while caddy needs 1.8, so we have to fetch Go from the official site ourselves. To save effort we'll skip compiling from source and just grab the binary release; if you want to build Go from source, I won't stop you! If the last command prints go version go1.8 linux/amd64, you're set:
```shell
wget https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.8.linux-amd64.tar.gz
# Edit .bashrc
vim ~/.bashrc
# Add this line at the end of the file, then save and exit:
export PATH=$PATH:/usr/local/go/bin
# Reload it
source ~/.bashrc
go version
```
Note: if you have a Go older than 1.8 installed, remove it first. For a package-manager install, use something like `apt remove golang-go`; for a source build, `sudo rm -rf /usr/local/go` is enough.

Preparing and building caddy
You'll need gcc, make, the sources and so on; gcc and friends are no big deal, the package manager has you covered. We'll start from source, assuming the working directory is /home/test and you've already changed into it. I expect many of you will fail at the second step, or find it painfully slow (especially if your server is inside mainland China); in that case, join me in cursing the GFW and use the workaround below:
```shell
export GOPATH=/home/test
go get github.com/mholt/caddy/caddy
cd $GOPATH/src/github.com/mholt/caddy/caddy
./build.bash
```
If go get fails, consider using this pile of commands instead:

```shell
export GOPATH=/home/test
wget http://7xvwrt.com1.z0.glb.clouddn.com/caddy170406.tar.bz2
tar xf caddy170406.tar.bz2
cd $GOPATH/src/github.com/mholt/caddy/caddy
./build.bash
```

Once the last command finishes, you should find a binary named caddy in the current directory. Of course, if you want some personality, run ./build.bash fuckGFW and the resulting binary will be called fuckGFW. Those of you who run ./build.bash candy.exe can go make me bang my head against a wall \(^o^)/~

Configuring the Caddyfile
Suppose we put caddy in /home/caddy; anywhere is fine, really. We then write a file named Caddyfile that looks roughly like this:

```
:443 www.shemissed.me {
    root /home/wwwroot/www.shemissed.me
    fastcgi / /tmp/php-cgi.sock php
    log /home/wwwlogs/caddy.www.shemissed.me.log
    tls /etc/letsencrypt/live/www.shemissed.me/fullchain.pem /etc/letsencrypt/live/www.shemissed.me/privkey.pem
}
```

If you have multiple virtual hosts (vhosts), just keep appending blocks like this one:

```
memory.shemissed.me {
    root /home/wwwroot/memory.shemissed.me
    fastcgi / /tmp/php-cgi.sock php
    log /home/wwwlogs/caddy.memory.shemissed.me.log
    tls /etc/letsencrypt/live/memory.shemissed.me/fullchain.pem /etc/letsencrypt/live/memory.shemissed.me/privkey.pem
}
```

Adjust to your own setup. A few notes:
If you want to enable HSTS, add a header line like:

```
Strict-Transport-Security "max-age=63072000; includeSubDomains; preload;"
```

If you want to set your own cipher suites, go read the official docs; as for the remaining features, rewrite and the like, knock yourself out.

/tmp/php-cgi.sock is php-fpm's listening address; just check what `listen` is set to in php-fpm.conf. For some people it will be 127.0.0.1:9000 instead. If your backend is JSP or the like, adapt it the same way.

Running caddy
Copy the binary we built earlier to /home/caddy, add that path to PATH (export PATH=$PATH:/home/caddy), then run `caddy -conf /home/caddy/Caddyfile -port 443 -http2 -quic`. Refresh the page a few times in the browser with F12 open and you should see QUIC. Sometimes it stubbornly refuses to show up (especially on desktop); I have no idea why.
Wait, is that it?

Of course not; there are still plenty of pitfalls. If the terminal is closed, caddy gets killed with it. You could append & to push it into the background and write a pidof-based script in crontab to check that it's still running, but even then that's not best practice. So what does proper usage look like?
- Put the binary at /usr/local/bin/caddy.
- Put a SysV-style init script at /etc/init.d/caddy (one ships in the official binary package).
- Put the configuration under /etc/caddy, with certificates in an ssl subdirectory owned by the www user or similar.
- Create the config file /etc/caddy/Caddyfile.
- Manage the service with service caddy start|stop|restart|reload|status (i.e. /etc/init.d/caddy restart).

And a few more things:

- Don't run it as root.
- Run it under a non-login shell.
- Raise the file descriptor ulimit (ulimit -n 8192, or set it in /etc/profile).

There's a whole pile of this sort of thing; go read the README.
-----------------
QUIC's official site: https://www.chromium.org/quic
---------------------
The canary experiment's results were striking as well: time to first byte (rspStart) for QUIC requests averaged 326 ms less than HTTP/2, a performance gain of about 25%. This is mainly thanks to QUIC's 0-RTT and 1-RTT handshakes, which let requests go out earlier.

In addition, request start time (reqStart) for QUIC averaged 250 ms less than h2, and page load completion time (loadEnd) averaged 2 s less. Because the page as a whole is fairly complex and many other resources block loading, overall completion takes quite long, around 9 s, so the gain there is about 22%.

Since the big players have already boarded this train, my company can consider following. To play it safe, I decided to enable QUIC on my own blog first and roll it out to production services gradually afterwards.
Overview of the approach
The approach is very simple: browsers without QUIC support keep going through nginx on TCP 443; browsers with QUIC support go through caddy on UDP 443.

Since nginx has no near-term plans to support QUIC, and being a gopher, I chose caddy as the QUIC reverse proxy. Its concrete installation and configuration are covered below.
For a browser that supports QUIC, the first visit to a QUIC-enabled site involves a service discovery step. The flow is described in detail in QUIC Discovery; in short, it goes like this:
- The browser accesses the site over TLS/TCP and checks whether the HTTP headers returned by the site include an alt-svc field.
- If the response carries a header like alt-svc: 'quic=":443"; ma=2592000; v="39"', then the site's UDP port 443 supports QUIC, the supported version is draft v39, and max-age is 2592000 seconds.
- The browser then initiates a QUIC connection. Until that connection is established, HTTP requests keep going over TLS/TCP; once it completes, subsequent requests go over QUIC.
- If the QUIC connection becomes unavailable, the browser checks at 5-minute, then 10-minute intervals whether it can be restored. If not, it automatically falls back to TLS/TCP.
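The discovery steps above hinge on parsing the alt-svc header value. As a minimal sketch (my own illustration, not from the post; it handles only a single alternative service like the example header above, not comma-separated lists):

```python
# Parse a single Alt-Svc header value such as:  quic=":443"; ma=2592000; v="39"
def parse_alt_svc(value):
    entry = {}
    for part in value.split(";"):
        part = part.strip()
        key, _, val = part.partition("=")
        entry[key] = val.strip('"')
    return entry

svc = parse_alt_svc('quic=":443"; ma=2592000; v="39"')
print(svc["quic"])     # ":443" means "same host, UDP port 443"
print(int(svc["ma"]))  # max-age in seconds: how long the mapping may be cached
```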
One gotcha here: for the same domain, TLS/TCP and QUIC must use the same port number or QUIC will not be enabled. There's no particular reason; the draft simply says so. For the discussion, see Why MUST a server use the same port for HTTP/QUIC?
As the discovery flow above shows, enabling QUIC on a site involves two actions:

- Configure nginx to add the alt-svc header.
- Install and configure a QUIC reverse proxy service.
Configuring nginx to add the alt-svc header

A single directive takes care of it:
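The directive in question is nginx's add_header; as a sketch matching the alt-svc header shown earlier (the version and max-age values come from this post's example, so adjust them to your deployment):

```nginx
# Inside the server block that terminates TLS on TCP 443:
# advertise QUIC draft v39 on UDP 443, cacheable for 30 days.
add_header alt-svc 'quic=":443"; ma=2592000; v="39"';
```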
Installing caddy as the QUIC reverse proxy

We mentioned above that for the same domain, TLS/TCP and QUIC must share a port number for QUIC to work. However, caddy's QUIC feature cannot be enabled on its own; it must be enabled together with TLS, and tragically the TCP port 443 that TLS wants is already occupied by nginx.

The caddy service config file /conf/blog.conf:
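As a sketch (this is not the post's actual config; the domain, backend address, and certificate paths here are placeholders), a Caddyfile for such a reverse proxy could look like:

```
blog.mydomain.com:443 {
    tls /etc/letsencrypt/live/blog.mydomain.com/fullchain.pem /etc/letsencrypt/live/blog.mydomain.com/privkey.pem
    proxy / 127.0.0.1:8080 {
        transparent
    }
    log /var/log/caddy/blog.log
}
```

Started with something like `caddy -conf /conf/blog.conf -quic`. Note that caddy will also try to bind TCP 443 for TLS, which has to be reconciled with nginx holding that port, as the text above points out.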
Enabling QUIC in Chrome

In chrome://flags/, find Experimental QUIC protocol, set it to Enabled, and restart the browser for it to take effect.

Testing whether QUIC is on: visit this site again at https://mydomain.com, then open chrome://net-internals/#quic in the browser. If you see QUIC sessions, it worked.
QUIC in the Wild
It realizes significant speedups, due to round trip reductions and other advantages, over the HTTPS/TCP/IP alternative. As the post details, we suggest others in ad tech become familiar with it to benefit from the same advantages Google is seeing. 05/10/17 - 3:45pm PDT
What is QUIC?
Originally announced in 2013, QUIC (Quick UDP Internet Connections) is an experimental network protocol, which runs on top of the UDP protocol and is usually requested through port 443 with an Alternative Service HTTP request header flag (example: alt-svc: quic="googleads.g.doubleclick.net:443"). A QUIC working group has been established to standardize the protocol; QUIC is currently still considered experimental.

From a high level, QUIC requests pack several round trips into a single, one-way request, including the security (TLS) handshake. Google's diagram from their 2015 blog post on the topic helps illustrate the round trip savings:
- QUIC is enabled by default in Google’s Chrome browser and underlying Chromium open source browser code. As of March 2017, Chrome accounts for 58.9% of users browsing the web.
- Brave blocks QUIC requests. QUIC is an opt-in feature in Opera, and is currently not available in Firefox, Edge, or Safari. HTTPS requests containing the alt-svc: quic=":443" response header fall back to traditional TCP connections in other browsers, or when QUIC fails in Chrome.
- QUIC use has not flown completely under the radar. Google's April 2015 blog post on QUIC mentioned that roughly 50% of Google traffic was being requested from Chrome/Chromium with QUIC. Missing from the announcement and other QUIC documentation is any mention of QUIC usage in Google's Doubleclick ad requests, or in Google Analytics tracking requests.
How can I review QUIC sessions and requests?
QUIC runs on top of the UDP protocol. QUIC requests are often made through the same port (443) that is used for TCP requests. Aside from some corporate firewalls that block UDP requests by their protocol number, making QUIC requests to port 443 helps requests get through firewalls configured to allow TCP requests to that port, independent of the protocol number in the IP header. This is a clever hack that dramatically reduces adoption friction, by avoiding the need for firewall reconfiguration in most cases.

- Chrome shows QUIC requests in Chrome Developer Tools. If you want to distinguish between QUIC and HTTP in the Network interface, right click to select and reveal the Protocol column.
- QUIC requests are visible in the chrome:// internal browser URLs, and of course in WireShark and other lower-level packet sniffing tools.
Tips for observing QUIC traffic
The easiest way to capture and observe QUIC traffic in detail is within the chrome://net-internals interface. Some chrome://net-internals shortcuts are included below for reference.

- To view the QUIC settings and session connections within the dedicated QUIC panel, enter the following address in the URL bar in Chrome: chrome://net-internals/#quic
- To view QUIC session requests in the Events panel, enter: chrome://net-internals/#events&q=type:QUIC_SESSION
- To access the alt-svc panel and view domains that carry the quic :443 Alternative Service HTTP response header, enter: chrome://net-internals/#alt-svc
- To export a JSON log that includes QUIC request traffic, enter: chrome://net-internals/#export
- The HTTP/2 and SPDY indicator Chrome extension provides a shortcut to the chrome://net-internals Events panel.
- Within the Network section of the Chrome Developer Tools, users can right click to reveal the Protocol column, which will show http/2+quic/36 as the request protocol. This column is not displayed with default settings in Developer Tools.
To disable QUIC in Chrome:

- Enter chrome://flags/ into the URL bar in Chrome.
- Locate the Experimental QUIC protocol flag.
- Click to expand the drop down menu and select Disabled.
QUIC and Google ad requests
When we inspected web page traffic via chrome://net-internals, we discovered that QUIC requests were and still are being used for a majority of Google's ad domains, including domains involved with bidding such as the one below:

```
14419: QUIC_SESSION
bid.g.doubleclick.net
Start Time: 2017-04-20 00:28:09.220

t= 95972 [st= 0] +QUIC_SESSION [dt=340129+]
  --> cert_verify_flags = 6
  --> host = "bid.g.doubleclick.net"
  --> port = 443
  --> privacy_mode = false
  --> require_confirmation = false
t= 96028 [st= 56] QUIC_CHROMIUM_CLIENT_STREAM_SEND_REQUEST_HEADERS
  --> :authority: bid.g.doubleclick.net
      :method: GET
      :scheme: https
      accept: */*
      accept-encoding: gzip, deflate, sdch, br
      accept-language: en-US,en;q=0.8
      cookie: id=224683e7072a008a||t=1492673205|et=730|cs=002213fd48a93fd86aff9b0172; IDE=AHWqTUnVMhtPcKMXwHTn_nS5yFVKx1XEjcVJo18Kg9O3XAc2HbLk0EBHtQ; DSID=NO_DATA
      referer: https://tpc.googlesyndication.com/safeframe/1-0-7/html/container.html
      user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36
  --> quic_priority = 1
  --> quic_stream_id = 5
```
Primary Google domains observed opening QUIC sessions
Google domains that made requests within QUIC sessions
Alternate Service Mappings
- https://pagead2.googlesyndication.com -> quic googleads.g.doubleclick.net:443, expires 2017-05-20 00:33:40; quic :443, expires 2017-05-20 00:33:40
- https://tpc.googlesyndication.com -> quic :443, expires 2017-05-20 00:33:22
- https://s0.2mdn.net -> quic :443, expires 2017-05-20 00:33:22
- https://cm.g.doubleclick.net -> quic googleads.g.doubleclick.net:443, expires 2017-05-20 00:33:22; quic :443, expires 2017-05-20 00:33:22
- https://www.google.com -> quic :443, expires 2017-05-20 00:33:22
- https://adx.g.doubleclick.net -> quic googleads.g.doubleclick.net:443, expires 2017-05-20 00:33:22; quic :443, expires 2017-05-20 00:33:22
- https://securepubads.g.doubleclick.net -> quic :443, expires 2017-05-20 00:33:22
- https://ad.doubleclick.net -> quic :443, expires 2017-05-20 00:33:19
- https://googleads4.g.doubleclick.net -> quic googleads.g.doubleclick.net:443, expires 2017-05-20 00:33:03; quic :443, expires 2017-05-20 00:33:03
- https://content.googleapis.com -> quic :443, expires 2017-05-20 00:33:02
- https://apis.google.com -> quic :443, expires 2017-05-20 00:33:01
- https://www.googletagservices.com -> quic googleads.g.doubleclick.net:443, expires 2017-05-20 00:29:26; quic :443, expires 2017-05-20 00:29:26
- https://accounts.google.com -> quic :443, expires 2017-05-20 00:33:01
- https://www.google-analytics.com -> quic :443, expires 2017-05-20 00:33:00
- https://ade.googlesyndication.com -> quic googleads.g.doubleclick.net:443, expires 2017-05-20 00:32:58; quic :443, expires 2017-05-20 00:32:58
- https://googleads.g.doubleclick.net -> quic googleads.g.doubleclick.net:443, expires 2017-05-20 00:32:54; quic :443, expires 2017-05-20 00:32:54
- https://www.gstatic.com -> quic :443, expires 2017-05-20 00:29:55
- https://s1.2mdn.net -> quic :443, expires 2017-05-20 00:32:43
- https://stats.g.doubleclick.net -> quic :443, expires 2017-05-20 00:32:40
- https://pubads.g.doubleclick.net -> quic :443, expires 2017-05-20 00:32:06
- https://static.doubleclick.net -> quic :443, expires 2017-05-20 00:32:07
- https://www.youtube.com -> quic :443, expires 2017-05-20 00:32:07
- https://video-ad-stats.googlesyndication.com -> quic :443, expires 2017-05-20 00:32:07
- https://s.ytimg.com -> quic :443, expires 2017-05-20 00:32:06
- https://google-analytics.com -> quic :443, expires 2017-05-20 00:32:03
- https://imasdk.googleapis.com -> quic :443, expires 2017-05-20 00:32:00
- https://fonts.googleapis.com -> quic :443, expires 2017-05-20 00:31:55
- https://ssl.google-analytics.com -> quic :443, expires 2017-05-20 00:31:58
- https://www.googletagmanager.com -> quic :443, expires 2017-05-20 00:31:55
- https://fonts.gstatic.com -> quic :443, expires 2017-05-20 00:31:55
- https://p4-fbm4tfy4du3vk-rsg77dtzm53vwr6k-if-v6exp3-v4.metric.gstatic.com -> quic :443, expires 2017-05-20 00:30:57
- https://ssl.gstatic.com -> quic :443, expires 2017-05-20 00:27:03
- https://bid.g.doubleclick.net -> quic :443, expires 2017-05-20 00:31:19
- https://p4-fbm4tfy4du3vk-rsg77dtzm53vwr6k-854535-i2-v6exp3.ds.metric.gstatic.com -> quic :443, expires 2017-05-20 00:31:08
- https://p4-fbm4tfy4du3vk-rsg77dtzm53vwr6k-854535-i1-v6exp3.v4.metric.gstatic.com -> quic :443, expires 2017-05-20 00:31:08
- https://ajax.googleapis.com -> quic :443, expires 2017-05-20 00:31:02
- https://www.googleadservices.com -> quic googleads.g.doubleclick.net:443, expires 2017-05-20 00:27:45; quic :443, expires 2017-05-20 00:27:45
- https://plus.google.com -> quic :443, expires 2017-05-20 00:27:25
- https://clients2.google.com -> quic :443, expires 2017-05-20 00:26:24
Examples of Doubleclick QUIC requests
Example of a TCP HTTP GET request URL with the alt-svc: quic=":443" HTTP response header.

What does this mean for the ad ecosystem?
It would be helpful for Doubleclick to clarify how they're using QUIC for ads and tracking. With DoubleClick's dominance in the ad market, publishers and advertisers need to have a clear understanding and documentation regarding which protocols are being used for ad and tracking requests.

We want to emphasize that we're fans of QUIC as a protocol, and we hope it is widely adopted. But from our contacts in ad tech, it appears that QUIC is not understood or used yet. While Doubleclick is not obligated to share its QUIC usage details, we hope that it will do so to increase QUIC adoption across the ad-tech ecosystem.
Additional QUIC info and resources:
- Google: QUIC FAQ for geeks
- Google: Flow control in QUIC
- Google: QUIC Wire Layout Specification
- Google: QUIC Crypto
- Google: Life of a URL Request
- Chromium Blog: A QUIC update on Google’s experimental transport
- Chromium Blog: Experimenting with QUIC
- ma.ttias.be: Google’s QUIC protocol: moving the web from TCP to UDP
- Digiday: WTF is advertising arbitrage?
- bugs.chromium.org: Google’s DFP ads not firing when QUIC protocol used
- Kate Pearce: HTTP/2 and QUIC: Teaching good protocols to do bad things (Black Hat 2016)
(from https://blog.brave.com/quic-in-the-wild/)

------------------------
SPDY, an open networking protocol developed primarily at Google for transporting web content, is currently supported by default on the Internet Explorer and Chrome web browsers. The core developers of SPDY have been involved in the development of HTTP/2. As of February 2015, Google has announced that following the recent final ratification of the HTTP/2 standard, support for SPDY would be deprecated and withdrawn completely in 2016. Let me show you how to disable SPDY in Internet Explorer and QUIC in Chrome.
A QUIC Review
QUIC (Quick UDP Internet Connections, pronounced quick) is an experimental transport layer network protocol designed at Google and implemented in 2012/2013. QUIC supports a set of multiplexed connections between two endpoints over User Datagram Protocol (UDP), and provides security protection equivalent to TLS/SSL. The concept is very straightforward: create a UDP-based protocol with reduced connection and transport latency, plus bandwidth estimation in each direction to avoid congestion. QUIC's main goal is to improve the perceived performance of connection-oriented web applications that currently use TCP. Improving TCP is a long-term goal for Google, and QUIC positions itself as nearly equivalent to an independent TCP connection, but with much reduced latency; it also tries to improve on SPDY-like stream multiplexing.

SPDY: A Short History

SPDY (pronounced speedy) is an open networking protocol developed primarily at Google for transporting web content. SPDY manipulates HTTP traffic, with particular goals of reducing web page load latency and improving web security. SPDY achieves reduced latency through compression, multiplexing, and prioritization, although this depends on a combination of network and website deployment conditions. The name "SPDY" is a trademark of Google and is not an acronym. Throughout the process, the core developers of SPDY have been involved in the development of HTTP/2. As of February 2015, Google has announced that following the recent final ratification of the HTTP/2 standard, support for SPDY would be deprecated and withdrawn completely in 2016. Since SPDY is supported by default on Internet Explorer and Chrome (I'm not sure about other browsers), I would like to show you how to disable either one. There have been reports that QUIC may have issues with NAT'ing, proxies, or traversing certain firewall configurations, so being able to turn it off should help when troubleshooting.

from https://www.garlandtechnology.com/blog/disable-quic-spdy

-----------------------------------

Related posts:
- Deploying a reverse proxy/mirror very easily with Caddy (self-signed SSL certificates supported)
- http://briteming.blogspot.com/2015/04/quicyoutube.html
- https://briteming.blogspot.com/2017/07/a-quic-implementation-in-pure-go.html
- https://briteming.blogspot.com/2017/09/quic.html
QUIC
QUIC is short for Quick UDP Internet Connections and is pronounced quick. It is developed by Google; the high-level design document lives in Google Docs at https://docs.google.com/document/d/1RNHkx_VvKWyWg6Lr8SZ-saqsQx7rFV-ev2jRFUoVD34/edit and is still being updated. The detailed design of the transport format is at https://docs.google.com/document/d/1WJvyZflAO2pq77yOLbp9NsGjC1CHetAXV8I0fQe-B_U/edit.
Starting from the characteristics of TCP/UDP, network security, and other considerations, the design document lays out a great deal of design rationale, and it opens by stating four shortcomings of SPDY:

- A single lost packet blocks the entire stream.
- TCP's congestion avoidance mechanism performs poorly, reducing bandwidth and adding serialized waiting time.
- The waiting time of TLS session resumption; the handshake mechanism costs extra round trips.
- The overhead of TLS decryption: packets that arrive first must wait for later packets before they can be decrypted.
You can think of QUIC as an exploration on top of UDP, designed to get past the bottlenecks SPDY runs into on TCP. Understood by analogy with SPDY, QUIC's payload can be seen as two layers: the upper layer resembles SPDY, while the lower layer imitates TCP's connection-oriented behaviour and reliability on top of UDP and adds a TLS-like encryption step.

QUIC's documentation is still in an unfinished state, and the Chromium implementation code is still being polished; it remains an experimental half-finished product with no performance comparison data. I only skimmed it and will not dig deeper.
---------------
Objections to HTTP/3 (formerly QUIC)
This piece collects the objections to HTTP/3 (QUIC); the usual concerns are all listed: "QUIC and HTTP/3: Too big to fail?!".
Most of them are problems caused by the use of UDP:
- Because of UDP, firewalls may not have the port open, and connections may have to wait for a timeout before falling back to TCP.
- Because of UDP, a lot of work moves into userland, so CPU usage is much higher than TLS 1.2/1.3 over TCP.
- UDP raises the security problem of amplification attacks, and the corresponding workarounds bring bandwidth concerns of their own.
- Since UDP requires doing your own congestion control, you effectively redo a TCP congestion algorithm on top of UDP, and because it is redone you have to consider fairness when competing with TCP for resources.
The article is a fair summary of the common doubts about HTTP/3; once things progress, it can be pulled out again as a checklist to see what has improved.
---------------
QUIC and HTTP/3 : Too big to fail?!
The new QUIC and HTTP/3 protocols are coming and they are the bee’s knees! Combining lessons and best practices from over 30 years of networking, the new protocol stack offers major improvements to performance, privacy, security and flexibility.
Much has been said about the potential benefits of QUIC, most of it based on Google’s experience with an early version of the protocol. However, its potential shortcomings are rarely talked about and little is yet known about the properties of the upcoming, standardized versions (as they are still under active development). This post takes a (nuanced) “devil’s advocate” viewpoint and looks at how QUIC and HTTP/3 might still fail in practice, despite the large amount of current enthusiasm. In all fairness, I will also mention counter arguments to each point and let the reader make up their own mind, hopefully after plenty of additional discussion.
Note: if you’re not really sure what QUIC and HTTP/3 are in the first place, it’s best to get up to speed a bit before reading this post, which assumes some familiarity with the topic. Some resources that might help you with that:
- Mattias Geniar’s blog post
- Cloudflare’s write-up
- Robert Graham’s comments
- Daniel Stenberg (@bagder)’s HTTP/3 explained
- Mailing list explanation and blog post by Patrick McManus
- And my own talk from DeltaVConf this year
1. End-to-end encrypted UDP you say?
One of the big selling points of QUIC is its end-to-end encryption. Where in TCP much of the transport-specific information is out in the open and only the data is encrypted, QUIC encrypts almost everything and applies integrity protection (see Figure X). This leads to improved privacy and security and prevents middleboxes in the network from tampering with the protocol. This last aspect is one of the main reasons for the move to UDP: evolving TCP was too difficult in practice because of all the disparate implementations and parsers.
Figure X: simplified conceptual representation of the (encrypted) fields in TCP and QUIC
Network operators and the spin bit
The downside is that network operators now have much less to work with to try and optimize and manage their network. They no longer know if a packet is an acknowledgment or a re-transmit, cannot self-terminate a connection and have no other way of impacting congestion control/send rate than to drop packets. It is also more difficult to assess for example the round-trip-time (RTT) of a given connection (which, if rising, is often a sign of congestion or bufferbloat).
There has been much discussion about adding some of these signals back into a visible-on-the-wire part of the QUIC header (or using other means), but the end result is that just a single bit will be exposed for RTT measurement: the “spin” bit. The concept is that this bit will change value about once every round trip, allowing middleboxes to watch for the changes and estimate RTTs that way, see Figure Y (more bits could lead to added resolution etc., read this excellent paper). While this helps a bit, it still limits the operators considerably, especially with initial signals being that Chrome and Firefox will not support the spin bit. The only other option QUIC will support is “Explicit Congestion Notification“, which uses flags at the IP-level to signal congestion.
Figure Y: A simple illustration of the working of the spin bit (source)
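As a rough illustration of the idea described above (my own sketch, not from the post; the packet trace is synthetic), an on-path observer can estimate RTT by timing the intervals between spin-bit flips it sees in one direction:

```python
# Estimate RTT from spin-bit flips observed on the wire.
# The spin bit changes value roughly once per round trip, so the time
# between consecutive flips approximates one RTT.
def rtt_samples(packets):
    """packets: list of (timestamp_seconds, spin_bit) as observed on-path."""
    samples = []
    last_flip = None
    prev_bit = None
    for ts, bit in packets:
        if prev_bit is not None and bit != prev_bit:
            if last_flip is not None:
                samples.append(ts - last_flip)  # one flip-to-flip interval ~ one RTT
            last_flip = ts
        prev_bit = bit
    return samples

# Synthetic trace: the bit flips every ~50 ms, suggesting an RTT of ~50 ms.
trace = [(0.000, 0), (0.010, 0), (0.050, 1), (0.075, 1), (0.100, 0), (0.150, 1)]
print(rtt_samples(trace))  # two samples of roughly 0.05 s each
```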
UDP blocking and alt-svc with fallbacks
I don’t know about you, but if I were a network operator (or nefarious dictatorship), I would be sorely tempted to just block QUIC wholesale if I’m doing any type of TCP optimization or use special security measures. It wouldn’t even be that difficult for web-browsing: nothing else runs on UDP:443 (whereas blocking TCP:443 would lead to much mayhem). While deploying QUIC, Google actually looked at this, to learn how many networks already block UDP/QUIC. They (and other research) found 3-5% of networks currently do not allow QUIC to pass. That seems fine, but these figures (probably) don’t include a lot of corporate networks, and the real question is: will it remain that way? If QUIC gets bigger, will (some) networks not start actively blocking it (at least until they update their firewalls and other tools to better deal with it)? “Fun” anecdote: while testing our own QUIC implementation’s public server (based in Belgium) with the excellent quic-tracker conformance testing tool, most of the tests suddenly started failing when the tool moved to a server in Canada. Further testing confirmed some IP-paths are actively blocking QUIC traffic, causing the test failures.
The thing is that blocking QUIC (e.g., in a company’s firewall) wouldn’t even break anything for web-browsing end users; sites will still load. As browsers (and servers!) have to deal with blocked UDP anyway, they will always include a TCP-based fallback (in practice, Chrome currently even races TCP and QUIC connections instead of waiting for a QUIC timeout). Servers will use the alt-svc mechanism to signal QUIC support, but browsers can only trust that to a certain extent because a change of network might suddenly mean QUIC becomes blocked. QUIC-blocking company network administrators won’t get angry phone calls from their users and will still be able to have good control over their setup; what’s not to like? They also won’t need to run and maintain a separate QUIC/H3 stack next to their existing HTTP(/2) setup.
Finally, one might ask: why would a big player such as Google then want to deploy QUIC on their network if they lose flexibility? In my assessment, Google (and other large players) are mostly in full control of (most of) their network, from servers to links to edge points-of-presence, and have contracts in place with other network operators. They know more or less exactly what’s going on and can mitigate network problems by tweaking load balancers, routes or servers themselves. They can also do other shenanigans, such as encode information in one of the few non-encrypted fields in QUIC: the connection-ID. This field was explicitly allowed to be up to 18 bytes long to allow encoding (load-balancing) information inside. They could also conceivably add additional headers to their packets, stripping them off as soon as traffic leaves the corporate network. As such, the big players lose a bit, but not much. The smaller players or operators of either only servers or only the intermediate networks stand to lose more.
Counterarguments
- End-users will clamor for QUIC to be allowed because of the (performance) benefits it provides
- QUIC doesn’t need performance enhancing middleboxes anyway because it has better built-in congestion control and faster connection setup
- Most current networks don’t block it, little chance they will start without a major reason/incident
- Running a QUIC+HTTP/3 stack next to TCP+HTTP/2 will be as easy as adding a couple of lines to a server config
2. CPU issues
As of yet, QUIC is fully implemented in user-space (as opposed to TCP, which typically lives in kernel-space). This allows fast and easy experimentation, as users don’t need to upgrade their kernels with each version, but also introduces severe performance overheads (mainly due to user-to-kernel-space communication) and potential security issues.
In their seminal paper, Google mentions their server-side QUIC implementation uses about 2x as much CPU as the equivalent TCP+TLS stack. This is already after some optimizations, but not full kernel bypass (e.g., with DPDK or netmap). Let me put that another way: they would need roughly twice the server hardware to serve the same amount of traffic! They also mention diminished performance on mobile devices, but don’t give numbers. Luckily, another paper describes similar mobile tests and finds that Google’s QUIC is mostly still faster than TCP but “QUIC’s advantages diminish across the board”, see Figure Z. This is mainly because QUIC’s congestion control is “application limited” 58% of the time (vs 7% on the desktop), meaning the CPU simply cannot cope with the large amount of incoming packets.
Figure Z: QUIC vs TCP performance. Red = QUIC better, Blue = TCP better. (source)
This would suggest QUIC provides most advantages over TCP in situations with bad networks and high-end devices. Sadly, bad networks are often coupled with bad devices, and the median global network and device are both quite slow. This means a lot of the network gains from QUIC are potentially (largely) undone by the slower hardware. Combine this with the fact that webpages themselves are also asking more and more CPU for themselves (leading to one web performance guru claiming JavaScript perf is more important than the network nowadays), and you’ve got quite the conundrum.
IoT and TypeScript
One of the oft-touted use cases for QUIC is in Internet-of-Things (IoT) devices, as they often need intermittent (cellular) network access and low-latency connection setup; 0-RTT and better loss resilience are quite interesting in those cases. However, those devices often also have quite slow CPUs. There are many issues where QUIC’s designers mention the IoT use case and how a certain decision might impact it, though as far as I know no stack has been tested on such hardware yet. Similarly, many issues mention taking into account a hardware QUIC implementation, but at my experience level it’s unclear whether this is wishful thinking and handwaving or something closer to a guarantee.
I am a co-author of a NodeJS QUIC implementation in TypeScript, called Quicker. This seems weird given the above, and indeed, most other stacks are in C/C++, Rust or Go. We chose TypeScript specifically to help assess the overhead and feasibility of QUIC in a scripting language and, while it’s still very early, it’s not looking too well for now, see Figure A.
Figure A: Quicker (TypeScript) vs ngtcp2 (C/C++) CPU and memory usage (source)
Counterarguments
- QUIC will move into kernel and/or hardware in the future
- TCP+TLS overhead accounts for almost nothing when compared to other overheads (e.g., PHP execution, database access). QUIC taking twice that is negligible.
- Current numbers are for google’s QUIC, IETF QUIC can/will be different
- (Client) hardware will become faster
- The overhead is not that high as to be unmanageable
- Even with a massive overhead, Google decided to deploy QUIC at scale. This indicates the benefits (severely) outweigh the costs. It would seem better web performance indeed leads to massively improved revenues, who knew?
- TCP also has a place in IoT
- “I’ve looked at your TypeScript code Robin, and it’s an ungodly mess. A competent developer could make this way faster”
3. 0-RTT usefulness in practice
Another major QUIC marketing feature (though it’s actually from TLS 1.3) is 0-RTT connection setup: your initial (HTTP) request can be bundled with the first packet of the handshake and you can get data back with the first reply, superfast!
However, there is a “but” immediately: this only works with a server that we’ve previously connected to with a normal, 1-RTT setup. 0-RTT data in the second connection is encrypted with something called a “pre-shared secret” (contained in a “new session ticket”), which you obtain from the first connection. The server also needs to know this secret, so you can only 0-RTT connect to that same server, not say, a server in the same cluster (unless you start sharing secrets or tickets etc.). This means, again, that load balancers should be smart in routing requests to correct servers. In their original QUIC deployment, Google got this working in 87% (desktop) – 67% (mobile) of resumed connections, which is quite impressive, especially since they also required users to keep their original IP addresses.
There are other downsides as well: 0-RTT data can suffer from “replay attacks”, where the attacker copies the initial packet and sends it again (several times). Due to integrity protection, the contents cannot be changed, but depending on what the application-level request carries, this can lead to unwanted behaviour if the request is processed multiple times (e.g., POST bank.com?addToAccount=1000). Thus, only what they call “idempotent” data can be sent in 0-RTT (meaning it should not permanently change state, e.g., HTTP REST GET but not PUT). Depending on the application, this can severely limit the usefulness of 0-RTT (e.g., a naive IoT sensor using 0-RTT to POST sensor data could, conceptually, be a bad idea).
Lastly, there is the problem of IP address spoofing and the following UDP amplification attacks. In this case, the attacker pretends to be the victim at IP a.b.c.d and sends a (small) UDP packet to the server. If the server replies with a (much) larger UDP packet to a.b.c.d, the attacker needs much less bandwidth to generate a large attack on the victim, see Figure B. To prevent this, QUIC adds two mitigations: the client's first packet needs to be at least 1200 bytes (max practical segment size is about 1460) and the server MUST NOT send more than three times that in response without receiving a packet from the client in response (thus "validating the path", proving the client is not a victim of an attack). So just 3600-4380 bytes, in which the TLS handshake and QUIC overhead is also included, leaves little space for an (HTTP) response (if any). Will you send the HTML <head>? Headers? Push something? Will it matter? This exact question is one of the things I'm looking forward to investigating in-depth.
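The two mitigations amount to simple bookkeeping on the server side. Here is a toy model of that rule (illustrative only, using the 1200-byte minimum and 3x factor mentioned above):

```python
# Toy model of QUIC's anti-amplification rule: before the client's address
# is validated, the server may send at most 3x the bytes it has received.

class ServerSendBudget:
    AMPLIFICATION_FACTOR = 3
    MIN_INITIAL_SIZE = 1200

    def __init__(self):
        self.received = 0
        self.sent = 0
        self.path_validated = False  # becomes True once the client echoes back

    def on_datagram_received(self, size: int):
        if not self.path_validated and size < self.MIN_INITIAL_SIZE:
            raise ValueError("client Initial must be at least 1200 bytes")
        self.received += size

    def may_send(self, size: int) -> bool:
        if self.path_validated:
            return True  # no cap once the client has proven it owns the path
        return self.sent + size <= self.AMPLIFICATION_FACTOR * self.received

budget = ServerSendBudget()
budget.on_datagram_received(1200)
print(budget.may_send(3600))  # True: exactly 3x what was received
print(budget.may_send(3601))  # False: would exceed the amplification limit
```

Everything the server wants to push in its first flight, handshake included, has to fit inside that budget, which is exactly why the question above matters.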
Figure B: UDP amplification attack (source)
The final nail in QUIC’s coffin is that TCP + TLS 1.3 (+ HTTP/2) can also use 0-RTT with the TCP “Fast Open” option (albeit with the same downsides). So picking QUIC just for this feature is (almost) a non-argument.
Counterarguments
- HTTP servers should already be immune to replays
- Replay prevention is easy when adding something like a timestamp or sequence number, since attackers cannot change the contents
- 1-RTT is still much better than the current 3-4 RTT with TCP + TLS 1.2
- Servers can just ignore the RFC's artificial UDP amplification limit of 3600 bytes if performance is needed. Bigger players could deploy other measures, like keeping track of how much is sent to a given IP at once, or taking into account server load or total outgoing bandwidth. Smaller players might not have that luxury though.
- There is some evidence that larger initial congestion windows do not have a big overall impact on HTTP/2 over TCP so the limit might not matter much in practice for H3
- TCP Fast Open is currently unusable in practice. Mozilla even gave up on trying to enable it in Firefox. (Counter-counterargument: over time, TCP Fast Open support will also increase)
4. QUIC v3.5.66.6.8.55-Facebook
As opposed to TCP, QUIC integrates a full version negotiation setup, mainly so it can keep on evolving easily without breaking existing deployments. The client uses its most preferred supported version for its first handshake packet. If the server does not support that version, it sends back a Version Negotiation packet, listing supported versions. The client picks one of those (if possible) and retries the connection. This is needed because the binary encoding of the packet can change between versions.
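The negotiation flow described above can be sketched as follows; the version numbers are illustrative, not real QUIC version codes:

```python
# Sketch of QUIC version negotiation as described above.
# Version numbers are illustrative, not real on-the-wire QUIC codes.

SERVER_VERSIONS = [3, 2, 1]  # versions the server supports, newest first

def connect(client_versions: list[int]) -> tuple[int, int]:
    """Return (negotiated_version, extra_round_trips_spent_negotiating)."""
    preferred = client_versions[0]
    if preferred in SERVER_VERSIONS:
        return preferred, 0  # handshake proceeds directly
    # Server answers with a Version Negotiation packet listing its
    # supported versions, and the client retries: +1 RTT.
    for v in client_versions:
        if v in SERVER_VERSIONS:
            return v, 1
    raise ConnectionError("no mutually supported version")

print(connect([3, 2]))  # (3, 0): first choice accepted, no extra RTT
print(connect([5, 2]))  # (2, 1): one extra round trip to agree on v2
```

The second case is the cost the next subsection worries about: every mismatch between the client's first guess and the server's list burns a full round trip.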
Every RTT is one too many
As follows from the above, each version negotiation takes 1 RTT extra. This wouldn't be a problem if we had a limited set of versions, but the idea seems to be that there won't just be, say, one official version per year, but a slew of different versions. One of the proposals was (is?) even to use different versions to indicate support for a single feature (the previously mentioned spin bit). Another goal is to have people use a new version when they start experimenting with different, non-standardized features. This all can (will?) lead to a wild-wild-west situation, where every party starts running their own slightly different versions of QUIC, which in turn will increase the number of instances in which version negotiation (and the 1 RTT overhead) occurs. Taking this further, we can imagine a dystopia where certain parties refuse to move to new standardized versions, since they consider their own custom versions superior. Finally, there is the case of drop-and-forget scenarios, for example in the Internet-of-Things use case, where updates to software might be few and far between.
A partial solution could potentially be found in the transport parameters. These values are exchanged as part of the handshake and could be used to enable/disable features. For example, there is already a parameter to toggle connection migration support. However, it’s not yet clear if implementers will lean to versioning or adding transport parameters in practice (though I read more of the former).
It may seem strange to worry about an occasional 1-RTT version negotiation cost, but for a protocol that markets 0-RTT connection setup, it is a bit contradictory. It is not inconceivable that clients/browsers will choose to always attempt the first connection at the lowest supported QUIC version to minimize the risk of the 1-RTT overhead.
Counterarguments
- Browsers will only support “main” versions and as long as your server supports those you should be ok
- Parties that run their own versions will make sure both clients and servers support those, or will make the 1-RTT trade-off decision consciously
- Clients will cache the version lists servers support and choose a supported version from the second connection onward
- Versions used for toggling individual features can easily share a single codebase. Servers will be smart enough not to launch negotiation if they don’t support that exact version, if they know the on-the-wire image is the same and they can safely ignore the missing feature
- Servers always send back their full list of supported versions in their transport parameters, even without version negotiation. From the second connection onward, the client can select the highest mutually supported setup.
5. Fairness in Congestion Control
The fact that QUIC is end-to-end encrypted, provides versioning and is implemented in user space provides a never-before seen amount of flexibility. This really shines when contemplating using different congestion control algorithms (CCAs). Up until now, CCAs were implemented in the kernel. You could conceivably switch which one you used, but only for your entire server at the same time. As such, most CCAs are quite general-purpose, as they need to deal with any type of incoming connection. With QUIC, you could potentially switch CCA on a per-connection basis (or do CC across connections!) or at least more easily experiment with different (new) CCAs. One of the things I want to look at is using the NetInfo API to get the type of incoming connection, and then change the CCA parameters based on that (e.g., if you’re on a gigabit cable, my first flight will be 5MB instead of 14KB, because I know you can take it).
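The per-connection tuning described here can be sketched in a few lines. The network-type hint and the thresholds below are hypothetical; the point is only that user-space congestion control lets you make this decision per connection, which a kernel-wide TCP CCA cannot:

```python
# Sketch of per-connection congestion-control tuning: pick the first-flight
# size from a hint about the client's access network. The hint values and
# window sizes are hypothetical, chosen to mirror the example in the text.

DEFAULT_INITIAL_WINDOW = 14 * 1024  # ~14KB, the classic ~10-packet start

def initial_congestion_window(network_hint: str) -> int:
    """Choose an initial window per connection, impossible with a single
    kernel-wide TCP CCA but trivial when CC lives in user space."""
    if network_hint == "ethernet-gigabit":
        return 5 * 1024 * 1024  # aggressive 5MB first flight: "you can take it"
    if network_hint == "cellular-2g":
        return 4 * 1024         # be gentle on slow, lossy links
    return DEFAULT_INITIAL_WINDOW

print(initial_congestion_window("ethernet-gigabit"))  # 5242880
print(initial_congestion_window("wifi"))              # 14336
```

Of course, this flexibility is precisely what the next paragraphs flag as dangerous: nothing stops an operator from always returning the aggressive value.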
Calimero
The previous example clearly highlights the potential dangers: if anybody can just decide what to do and tweak their implementations (without even having to recompile the kernel- madness!), this opens up many avenues for abuse. After all, an important part of congestion control is making sure each connection gets a more or less equal share of the bandwidth, a principle called fairness. If some QUIC servers start deploying a much more aggressive CCA that grabs more than its equal share of bandwidth, this will slow down other, non-QUIC connections and other QUIC connections that use a different CCA.
Nonsense, you say! Nobody would do that, the web is a place of gentlepeople! Well… Google’s version of QUIC supports two congestion control algorithms: TCP-based CUBIC and BBR. There is some conflicting information, but at least some sources indicate their CCA implementations are severely unfair to “normal” TCP. One paper, for example, found that QUIC+CUBIC used twice the bandwidth of 4 normal TCP+CUBIC flows combined. Another blogpost shows that TCP+BBR could scoop up two-thirds of the available bandwidth, see Figure C. This is not to say that Google actively tries to slow down other (competing) flows, but it shows rather well the risks with letting people easily choose and tweak their own CCAs. Worst case, this can lead to an “arms race” where you have to catch up and deploy ever more aggressive algorithms yourself, or see your traffic drowned in a sea of QUIC packets. Yet another potential reason for network operators to block or severely hamper QUIC traffic.
Figure C: BBR vs CUBIC fairness (both on TCP) (source)
Another option is of course that a (small) implementation error causes your CCA to perform suboptimally, slowing down your own traffic. Seeing as all these things have to be re-implemented from scratch, I guarantee these kinds of bugs will pop up. Since congestion control can be very tricky to debug, it might be a while before you notice. For example, when working on their original QUIC implementation, Google uncovered an old TCP CUBIC bug and saw major improvements for both TCP and QUIC after fixing it.
Counterarguments
- Networks have mitigations and rate limiting in place to prevent this kind of abuse
- Congestion control manipulation has been possible in TCP since the start and seems not to occur to a problematic degree in practice
- There is no evidence of large players (e.g., Youtube, Netflix) employing this type of strategy to make sure their traffic gets priority
- Really dude, this again? What do you think browsers were doing when they started opening 6 TCP connections per domain?
6. Too soon and too late
QUIC has been around for quite a long time: starting as a Google experiment in 2012 (gQUIC), it was passed on to the IETF for standardization (iQUIC) in 2015 after a decent live deployment at scale, proving its potential. However, even after 6 years of design and implementation, QUIC is far from (completely) ready. The IETF deadline for v1 had already been extended to November 2018 and has now been moved again to July 2019. While most large features have been locked down, even now changes are being made that lead to relatively major implementation iterations. There are over 15 independent implementations, but only a handful implement all advanced features at the transport layer. Even fewer (two at the moment) implement a working HTTP/3 mapping. Since there are major low-level differences between gQUIC and iQUIC, it is as yet unclear whether results from the former will hold true in the latter. This means the theoretical design is maybe almost finished, but implementations remain relatively unproven (though Facebook claims to already be testing QUIC+HTTP/3 for some internal traffic). There is also not a single (tested) browser-based implementation yet, though Apple, Microsoft, Google and Mozilla are working on IETF QUIC implementations and we ourselves have started a POC based on Chromium.
Too (much too) soon
This is problematic because the interest in QUIC is rising, especially after the much talked-about name-change from HTTP-over-QUIC to HTTP/3. People will want to try it out as soon as possible, potentially using buggy and incomplete implementations, in turn leading to sub-par performance, incomplete security and unexpected outages. People will in turn want to debug these issues, and find that there are barely any advanced tools or frameworks that can help with that. Most existing tools are tuned for TCP or don't even look at the transport layer, and QUIC's layer-spanning nature will make debugging cross-layer (e.g., combining 0-RTT with H3 server push) and complex (e.g., multipath, forward error correction, new congestion control) issues difficult. This is in my opinion an extensive issue; so extensive that I've written a full paper on it, which you can read here. In it, I advocate for a common logging format for QUIC which allows creating a set of reusable debugging and visualization tools, see Figure D.
Figure D: A per-stream visualization of a QUIC connection helps see bandwidth distribution and flow control across resources (source)
As such, there is a risk that QUIC and its implementations will not be ready (enough) by the time people want to start using it, meaning the "Trough of Disillusionment" may come too early and broad deployment will be delayed years. In my opinion, this can also be seen in how CDNs are tackling QUIC: Akamai, for example, decided not to wait for iQUIC and instead has been testing and deploying gQUIC for a while. LiteSpeed burns the candle at both ends, supporting gQUIC while also pioneering iQUIC. On the other hand though, Fastly and Cloudflare are betting everything on just iQUIC. Make of it what you will.
Too (little too) late
While QUIC v1 might be too early, v2 might come too late. Various advanced features (some of which were in gQUIC), such as forward error correction, multipath and (partial) unreliability are intentionally kept out of v1 to lower the overall complexity. Similarly, major updates to HTTP/3, such as to how cookies work, are left out. In my opinion, H3 is a very demure mapping of HTTP/2 on top of QUIC, with only minor changes. While there are good reasons for this, it means many opportunities for which we might want to use QUIC have to be postponed even longer.
The concept of separating QUIC and HTTP/3 is so that QUIC can be a general-purpose transport protocol, able to carry other application layer data. However, I always struggle to come up with concrete examples for this… WebRTC is often mentioned, and there was a concrete DNS-over-QUIC proposal, but are there any other projects ongoing? I wonder if there would be more happening in this space if some of the advanced features would be in v1. The fact that the DNS proposal was postponed to v2 surely seems to indicate so.
I think it will be difficult to sell QUIC to laymen without these types of new features. 0-RTT sounds nice, but is possibly not hugely impactful, and could be done over TCP. Less Head-of-Line blocking is only good if you have a lot of packet loss. Added security and privacy sounds nice to users, but has little added value besides their main principle. Google touts 3-8% faster searches: is that enough to justify the extra server and setup costs? Does QUIC v1 pack enough of a punch?
Counterarguments
- Browsers will only support QUIC when it’s stable (enough) and users probably won’t notice most bugs
- Debugging QUIC will be done by professionals who can get by with their own tools and logging formats
- HTTP/2 has had some pretty big issues and bugs (which no-one even seemed to notice for a long time) and yet has found a decent uptake
- Even if QUIC doesn’t get a huge uptake in the first two years, it’s still worth it. We’re in this for the next 30 years.
- QUIC v2 will come soon enough, there are already working groups and proposals looking at unreliability, multipath, WebRTC etc.
- The QUIC working group was never intended to bring major changes to HTTP and that work will continue in the HTTP working group
- QUIC’s flexibility ensures that we can now iterate faster on newer features, both on the application and transport layer
- Laymen will follow whatever the big players do (this is how we got into this JavaScript framework mess, remember?)
- A wizard is never late
Conclusion
If you’ve made it through all that: welcome to the end! Sit, have a drink!
I imagine there will be plenty of different feelings across readers at this point (besides exhaustion and dehydration) and that some QUIC collaborators might be fuming. However, keep in mind what I stated in the beginning: this is me trying to take a “Devil’s Advocate” viewpoint, trying to weed out logical errors in arguments pro and con QUIC. Most (all?) of these issues are known to the people who are standardizing QUIC and all their decisions are made after (very) exhaustive discussion and argumentation. I probably even have some errors and false information in my text somewhere, as I’m not an expert on all subtopics (if so, please let me know!). That is exactly why the working groups are built up out of a selection of people from different backgrounds and companies: to try and take as many aspects into consideration as possible. Trade-offs are made, but always for good reasons.
That being said, I still think QUIC might fail. I don’t think the chance is high, but it exists. Conversely, I also don’t think there is a big chance it will succeed from the start and immediately gain a huge piece of the pie with a broader audience outside of the bigger companies. I think the chance is much higher that it fails to find a large uptake at the start, and that it instead has to gain a broad deployment share more slowly, over a few years. I think this will be slower than what we’ve seen with HTTP/2, but (hopefully) faster than IPv6.
I personally still believe strongly in QUIC (I should, I’m betting my PhD on it…). It’s the first major proposed change on the transport layer that might actually work in practice (the arguments in this post are several times worse and more extensive for many previous options). I feel very grateful to have the chance to witness QUIC’s standardization and deployment up close. As it is made to evolve, I think it has all the potential to survive a slower uptake, and remain relevant for decades. The bigger companies will deploy it, debug it, improve it, open source it, and in 5 years time more stuff will be running on QUIC than on TCP.
Thanks to:
- Barry Pollard, writer of the excellent HTTP/2 in action.
- Daniel Stenberg, Daan De Meyer, Loganaden Velvindron, Dmitri Tikhonov, Subodh Iyengar, Mariano Di Martino and other commenters on the original draft of this text.
------------------
QUIC
- QUIC integrates tightly with HTTP/2: HTTP/2's streams, frames, and header compression map directly onto QUIC's streams, packets, and header compression. From a protocol-layering point of view this couples the application and transport layers, but in the concrete HTTP/2 scenario it lets QUIC take over HTTP/2's congestion control. Compared with HTTP/2 over TLS 1.3, which is subject to congestion control at both the TCP and HTTP/2 layers, this is clearly a more direct and effective control strategy. Moreover, QUIC's congestion control is pluggable at the application layer. You can write a congestion control module for TCP as well, but that means a kernel module, where one careless mistake can crash the whole system. If you are interested, implementing a BBR congestion control module for QUIC is easy.
- QUIC supports connection migration: the connection stays alive even after the client switches to a new network IP. This is especially friendly to roaming mobile networks, and together with a first-connection RTT 25% lower than TLS 1.3's, it gives a better user experience. Multipath TCP (MPTCP) can also support connection migration, but it is constrained by the operating system and network stack, so widespread adoption is still a long way off.
- QUIC supports Forward Error Correction (FEC), using a degree of packet redundancy to recover lost packets without retransmitting them. Fewer retransmissions mean lower latency and better bandwidth utilization. If you have used kcptun, you will have noticed that it also employs FEC, which works especially well on weak, high-loss networks.

References:
- Improved Handshakes in TLS version 1.3
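The XOR-parity scheme that early (Google) QUIC experimented with can be sketched in a few lines. This is an illustrative toy, not any stack's actual FEC code: one parity packet per group lets the receiver rebuild a single lost packet without waiting for a retransmission.

```python
# Toy XOR-based forward error correction, in the spirit of early QUIC's FEC:
# the parity packet is the byte-wise XOR of every packet in the group, so
# XOR-ing the survivors with the parity reproduces the one missing packet.

def xor_parity(packets: list[bytes]) -> bytes:
    parity = bytearray(len(packets[0]))  # assume equal-sized packets here
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing packet from the survivors plus parity."""
    return xor_parity(received + [parity])

group = [b"pkt-one", b"pkt-two", b"pkt-333"]
parity = xor_parity(group)
rebuilt = recover([group[0], group[2]], parity)  # pkt-two was "lost"
print(rebuilt == group[1])  # True
```

The price is the redundancy itself: every group carries one extra packet's worth of bandwidth, which is exactly the trade kcptun makes on lossy links.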
- An overview of TLS 1.3 and Q&A
- QUIC Wire Layout Specification
- QUIC Crypto
- QUIC FEC v1
- QUIC: next generation multiplexed transport over UDP
Of course, QUIC has downsides too. For example, many carriers assign UDP a lower QoS level than TCP; some even allow only UDP on DNS port 53 and block all other UDP packets. In IDCs, UDP is generally given lower priority than TCP as well. So even if QUIC is widely deployed in the future, it will not completely replace TCP+TLS: we must make sure we can fall back to TCP+TLS whenever QUIC is unavailable.
Summary
On a first connection, both TLS 1.3 and QUIC need 1-RTT; only on resumed connections can either achieve 0-RTT. In the HTTP/2 scenario, QUIC effectively reduces the number of round trips for the first connection, and it supports connection migration, FEC, and more flexible and efficient congestion control, so it can deliver a low-latency experience on mobile and weak networks.
Further reading:
HTTP/3 is already on the bowstring. Are you ready?
HTTP/2 over QUIC was the only deployed HTTP implementation that solved head-of-line blocking at the transport layer. Back then, both HTTP/2 over TCP and HTTP/2 over QUIC (UDP) were considered HTTP/2, differing only in the transport protocol underneath. That slightly ambiguous naming became history in November 2018:
In a mailing-list discussion on October 28, 2018, Mark Nottingham, chair of the IETF HTTP and QUIC working groups, made a formal request to rename HTTP-over-QUIC to HTTP/3, in order to "clearly identify it as another binding of HTTP semantics ... so people understand its separation from QUIC", and, once the draft is finalized and published, to fold the QUIC working group's HTTP work into the HTTP working group. Over the following days of discussion, Mark Nottingham's proposal was accepted by IETF members, who gave their official approval in November 2018, recognizing HTTP-over-QUIC as HTTP/3.
Although this may look like HTTP/2 over QUIC merely getting a new name (from my personal perspective, HTTP/2.1 might have been a better fit), behind it lies the IETF's attitude toward, and direction for, the future of the HTTP standard. Looking back in a few years' time, the significance of settling on this name will probably be even clearer.
Differences between HTTP/3 and HTTP/2 over QUIC
QUIC will become a general-purpose secure transport protocol
At this stage, Google's QUIC implementation and the IETF's are incompatible. Google's QUIC can only be used with HTTP/2 and has some tight protocol-level bindings to it, such as mapping QUIC frames to HTTP/2 frames. As a result, few big players followed up on QUIC, and HTTP/2 over QUIC was essentially deployed only in Google's own software such as Chrome and Gmail, which at one point gave the industry the impression that "only Google is doing this".
After QUIC was taken into the IETF, Google obviously could no longer play it that way. QUIC is now positioned as a general-purpose secure transport protocol.
You can roughly think of QUIC over UDP as the next generation of (or a replacement for) TLS over TCP. In other words, QUIC will be usable under any application-layer protocol; at the current stage it is simply being applied and validated with HTTP first.
TLS 1.3 as the unified security protocol
In 2018 several important web standards were finally settled, one of them being RFC 8446, TLS 1.3. This standard matters greatly for reducing latency and improving user experience, especially on mobile. Although both TLS 1.3 and QUIC can achieve 0-RTT and thereby lower latency, QUIC went ahead and implemented a security protocol of its own, mainly because the TLS 1.3 standard had not yet been published at the time, and QUIC needed a security protocol:
The QUIC crypto protocol is the part of QUIC that provides transport security to a connection. The QUIC crypto protocol is destined to die. It will be replaced by TLS 1.3 in the future, but QUIC needed a crypto protocol before TLS 1.3 was even started.
Now that the TLS 1.3 standard has been published and HTTP/3 has been taken into the IETF, it is only natural for QUIC to adopt TLS 1.3 as its security protocol. To Google's credit, it has never been petty or slow on this front.
QPACK header compression replaces HPACK
QPACK's design is in fact very similar to HPACK's; it was proposed separately mainly to fit QUIC better, and it is also a necessary step in decoupling QUIC from HTTP/2 and unifying Google's QUIC with the IETF standard.
Problems and challenges for HTTP/3
UDP reachability
Almost all carriers "discriminate" against UDP packets, and the reason is easy to understand: several of history's most notorious DDoS attacks were UDP-based. One domestic broadband provider even outright blocks UDP on all ports other than 53 in some regions, and other carriers and IDCs, even where they do not block UDP, throttle it severely. The outlook here is not great, but we believe that as the standard spreads and lands in production, carriers will gradually relax their discriminatory policies toward UDP traffic. The situation abroad is somewhat better: according to Google's data, fewer than 10% of connections in their QUIC deployment have to fall back.
QUIC does not support plaintext transport
For users this is an advantage, not a problem. For the domestic content-censorship environment, however, it is a hurdle that cannot be ignored. But QUIC will, after all, be based on TLS; given that HTTPS has managed to spread domestically, the outlook for QUIC adoption may be somewhat more optimistic.
UDP consumes more resources
At this stage, UDP consumes more CPU and is processed more slowly than TCP. That is an undeniable fact, but I believe that as UDP applications multiply, kernel and hardware optimizations will catch up until UDP matches or surpasses TCP's performance. And because QUIC is implemented in the application layer, it can iterate faster and is cheaper and easier to deploy and update, which to some extent relieves the protocol-ossification problem that afflicts TCP.
Learning more about HTTP/3
For a comprehensive introduction to HTTP/3, I recommend HTTP/3 explained by Daniel Stenberg (the author of curl); if you would rather not read English, Yi Bai has translated it into Chinese as HTTP/3详解.
-----------------------------
Is QUIC vulnerable to "UDP reflection DDoS" attacks?
360's information security department published an analysis of DDoS attacks that exploit UDP reflection: "Memcache UDP反射放大攻击技术分析" (a technical analysis of Memcache UDP reflection amplification attacks). The report drew broad attention across the industry. According to the article, a single DDoS attack disclosed by Qrator Labs on medium.com peaked at 480 Gbps, and 360's security team has itself detected and confirmed even larger attacks that simply went unreported.
Not long after that incident came to light, I upgraded my blog to support the UDP-based QUIC protocol to improve the site's access experience.
With the blind confidence that hardly anyone visits my little site anyway, I did not agonize at the time over whether QUIC also has a UDP reflection vulnerability. A few days ago, however, I saw that the well-known blogger Ruan Yifeng's site had been hit by a DDoS attack, and my heart skipped a beat: what goes around comes around; better to plug the hole sooner than later.
What is a UDP reflection DDoS attack?
In short, the attacker exploits the "design flaw" that IP networks do not verify the true source address: it sends UDP requests with a forged source address (usually the victim's IP) to servers that provide UDP-based services, so the responses to those requests are all delivered to the victim's host. We call this a UDP reflection DDoS attack.
The traffic is bounced off an intermediary server, rather than sent directly, because the abused server generally provides traffic amplification: once a spoofed-IP UDP request reaches it, the abused server sends more data to the victim than the request contained.
The ratio of the abused server's output traffic to its input traffic is called the amplification factor. It depends on the UDP service the abused server runs. The Memcache-based DRDoS attacks mentioned above can reliably achieve an amplification factor of 60,000, and the DNS we use every day can easily yield a factor of 50.
Working backward from the amplification factor, we can see that a UDP service with a factor of 1 or less has no value to an attacker: in pure bandwidth terms, attacking the victim directly from the attacker's own host would be more efficient.
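The arithmetic above is simple enough to write down. The request/response sizes below are rough illustrative figures consistent with the factors cited in the text, not measurements:

```python
# Amplification factor = response bytes / request bytes.
# Sizes below are illustrative, chosen to match the factors cited above.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    return response_bytes / request_bytes

# A tiny ~15-byte query to an exposed Memcache server can trigger a
# response around 900,000 bytes: the ~60,000x factor mentioned above.
print(amplification_factor(15, 900_000))      # 60000.0

# A small ~60-byte DNS query returning a ~3000-byte answer: ~50x.
print(amplification_factor(60, 3_000))        # 50.0

# A factor <= 1 means reflection is pointless: attacking the victim
# directly from the attacker's own host would be at least as efficient.
print(amplification_factor(1200, 1200) <= 1)  # True
```

The last case is exactly the property QUIC engineers for on new connections, as the next section explains.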
Is QUIC vulnerable to UDP reflection attacks?
Following the Zhihu convention of giving the impatient the conclusion first: QUIC can defend against this.
QUIC addresses the problem mainly through the following mechanisms:
- For a client initiating a brand-new QUIC connection, the server requires the initial hello packet to be fully padded. This packet is typically 1370 bytes under IPv4 and 1350 bytes under IPv6. In QUIC, the basic unit of data exchange between server and client is the UDP datagram, and a fully padded packet already reaches the maximum packet size, so the server's response packets can be no larger than this hello packet. The amplification factor is therefore at most 1, and establishing a new connection offers nothing worth reflecting.
- Once a QUIC connection is established, the client's packets are no longer fully padded. At that point, if they could be exploited for UDP reflection, the amplification factor would exceed 1. QUIC therefore introduces the source address token: after a connection is successfully established, the server issues a source address token to the client and requires the client to carry it in subsequent packets; the server processes only packets whose token is valid. The token generally encodes the client's source address and the server's timestamp, so it acts as the client's proof of ownership of its IP address.
- Because source address tokens could be sniffed and collected by an attacker on the network, QUIC defines an expiry and refresh scheme for them. On the other hand, if the client attached the token to every packet it sent, it would not only inflate client traffic but also add token-validation overhead on the server, increasing protocol latency. QUIC therefore allows the client to decide dynamically, according to policy, whether to carry the token: the server collects statistics per source address, and once the number of anomalous request/response exchanges from a source address exceeds a threshold, it requires clients at that address to carry the token and rejects requests that cannot present a valid one.
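A source address token of the kind described above can be sketched as an authenticated (IP, timestamp) blob. The HMAC construction and encoding below are illustrative, not how any real QUIC stack serializes its tokens:

```python
# Sketch of a source address token: the server binds the client's IP and a
# timestamp into an opaque authenticated blob. Construction is illustrative.
import hashlib
import hmac

SECRET = b"server-only-secret"    # hypothetical server-side key
TOKEN_LIFETIME = 24 * 3600       # expire tokens after a day

def issue_token(client_ip: str, now: float) -> bytes:
    payload = f"{client_ip}|{int(now)}".encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"|" + tag

def validate_token(token: bytes, client_ip: str, now: float) -> bool:
    try:
        ip, ts, tag = token.rsplit(b"|", 2)
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SECRET, ip + b"|" + ts, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return False  # forged or tampered token
    # The IP check is the whole point: it proves ownership of the address.
    return ip.decode() == client_ip and 0 <= now - int(ts) < TOKEN_LIFETIME

token = issue_token("203.0.113.7", 1_700_000_000)
print(validate_token(token, "203.0.113.7", 1_700_000_060))   # True
print(validate_token(token, "198.51.100.9", 1_700_000_060))  # False: spoofed IP
```

A spoofing attacker can capture a token, but it names the real client's IP, so packets reflected toward a victim's address fail the check.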
Further reading
Modernizing the internet with HTTP/3 and QUIC
As the internet’s protocols evolve, so too do our experiences using them. QUIC, the new transport protocol set to replace TCP, and HTTP/3, the HTTP version running atop QUIC, will start to see wide deployment in 2020. After more than six years of building, reframing, and refinement, HTTP/3 and QUIC are primed to modernize the internet in a number of ways: faster response times, greater accessibility worldwide, and setting the standard for built-in encryption, just to name a few.
Coming soon, we will make HTTP/3 and QUIC available for customers to play with on the Fastly platform, ahead of its wider release, so you can get a taste of what’s to come and help shape where we’re headed. Please send us an email at quic-beta@fastly.com if you’re interested in participating in our upcoming beta program. We are also launching a site that browsers and other clients can use to test for HTTP/3 connectivity: http3.is. This site — inspired by the dancing kame that many of us have used for testing IPv6 connectivity — runs on Fastly’s HTTP/3 and QUIC servers (h2o and quicly). So, open a browser that you’d like to test, point it at http3.is, and see whether your communication uses HTTP/3.
As an editor in the IETF’s QUIC working group, I have been a huge proponent of HTTP/3 and QUIC, and I gave a talk about their potential at Fastly’s customer conference, Altitude NYC, back in November last year. I walked through why these are important to us at Fastly and why they should matter to our customers. Watch the talk in the video below, or read on to delve into some of the important features of QUIC that should make HTTP/3 interesting to our customers.
Hello, faster handshakes
Clients and servers begin their interaction with one another via transport and crypto handshakes. These establish that the two parties are ready to communicate, and set up the ground rules for doing so. TCP and TLS, the prevalent transport and crypto protocols, have to do their handshakes in order, which must occur before any data can be exchanged. This means that with TCP and TLS, an end user spends at least two round trips setting up communication before any web traffic can flow. That’s where QUIC comes in. QUIC collapses the transport and crypto handshakes together. As a result, only one round trip is necessary for setup before traffic can flow.
When re-establishing a connection to a known server, this can be reduced to one round trip with TCP and TLS version 1.3, but that is still a fair bit of time for the web: entire web pages finish transferring and loading in that amount of time. Under the same conditions with QUIC, web traffic goes out right away, without waiting for any setup time. This is what QUIC calls Zero Round-Trip Time or 0-RTT connection establishment, and we expect it to be a significant improvement in latency for our customers’ web pages and apps.
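The round-trip accounting described in these two paragraphs can be tallied explicitly. These are the commonly cited counts before the first request can be sent; real deployments vary:

```python
# Round trips spent on setup before any HTTP data flows, per the text.
# These are the commonly cited counts; real-world numbers vary.

HANDSHAKE_RTTS = {
    "TCP + TLS 1.2":           1 + 2,  # TCP handshake, then two TLS round trips
    "TCP + TLS 1.3":           1 + 1,  # TLS 1.3 handshakes in a single round trip
    "TCP + TLS 1.3 (resumed)": 1,      # TLS 0-RTT, but TCP still shakes hands
    "QUIC":                    1,      # transport + crypto handshakes combined
    "QUIC (resumed)":          0,      # 0-RTT: the request rides the first packet
}

for setup, rtts in HANDSHAKE_RTTS.items():
    print(f"{setup:26s} {rtts} RTT(s) before the first request")
```

The bottom row is the 0-RTT establishment the paragraph describes: with a known server, web traffic goes out with the very first packet.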
More secure communication
Current web communication is secured to the extent possible by TLS, but this still leaves a fair amount of metadata visible to third parties. Specifically, all of the TCP header is visible, and this can leak a significant amount of information. See this exemplary work, which uses information in the TCP headers to detect what video on Netflix is being watched. TCP headers can also be — and often are — tampered with by third parties, while the communicating client and server are none the wiser.
Encryption and privacy are fundamental to QUIC’s design. All QUIC connections are protected from tampering and disruption, and most of the headers are not even visible to third parties.
Solving the “parking lot problem”
Picture this rather universal problem: as you leave work for the day and head to the parking lot, you pull up a map on your phone to see what traffic looks like on your way home. The map is slow to load because, though you’re still connected to your company’s WiFi, you’re too far away for it to be useful. To load the best route home, you have to turn off WiFi, allowing your phone to connect to your cellular network.
This is known as the parking lot problem, and we all experience some form of this problem during our everyday activities. Your mobile device is capable of speaking to multiple networks. But, for a variety of reasons, it does not quickly detect and switch if the one it is using right now is of terrible quality.
QUIC solves this with connection migration, a feature that allows your connection to the server to move with you as you switch networks. QUIC uses connection identifiers to make this possible: a server hands out these identifiers to a client within a connection. If the client moves to a new network and wishes to continue the connection, it simply uses one of these identifiers in its packets, letting the server know that the client wishes to continue communication but from a new network.
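The mechanism can be sketched as a lookup table keyed by connection ID rather than by the client's address. All names below are illustrative; this is a model of the idea, not a real QUIC implementation:

```python
# Sketch of connection migration via connection IDs: the server finds the
# session by CID, not by (IP, port), so a client that switches networks
# keeps its connection. Illustrative model only.

class Server:
    def __init__(self):
        self.connections = {}  # connection ID -> session state

    def handshake(self, client_addr, cid: bytes):
        self.connections[cid] = {"addr": client_addr, "bytes": 0}

    def on_packet(self, client_addr, cid: bytes, payload: bytes):
        conn = self.connections[cid]      # lookup ignores the address...
        if conn["addr"] != client_addr:   # ...so a packet from a new address
            conn["addr"] = client_addr    # simply migrates the connection
        conn["bytes"] += len(payload)
        return conn

srv = Server()
srv.handshake(("198.51.100.4", 50000), cid=b"\x01\x02")           # on WiFi
conn = srv.on_packet(("203.0.113.9", 41000), b"\x01\x02", b"hi")  # on cellular
print(conn["addr"], conn["bytes"])  # ('203.0.113.9', 2): same session survived
```

A TCP connection, identified by its 4-tuple, would have been torn down the moment the address changed; here the session state carries straight over. (Real QUIC additionally validates the new path before fully trusting it.)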
What else will QUIC change?
QUIC keeps flexibility for the future in mind, ensures that applications’ connections are confidential, and promises to provide better internet performance globally. These are all built into the protocol’s design. It’s exciting to think that, very soon, QUIC and HTTP/3 will be working quietly behind the scenes to make the internet better for everyone connected to it. Join us for the journey — email quic-beta@fastly.com to raise your hand for our upcoming beta and be on the forefront of building the internet of tomorrow.
Jana Iyengar is a Product Lead, Infrastructure Services at Fastly, with a focus on transport and networking performance, including building and deploying QUIC and HTTP/3. He is an editor in the IETF’s QUIC working group and he chairs the IRTF’s Internet Congestion Control Research Group (ICCRG). Prior to Fastly, he worked on QUIC and other networking projects at Google, before which he was an Associate Professor of Computer Science at Franklin & Marshall College.
from https://www.fastly.com/blog/modernizing-the-internet-with-http3-and-quic