Using pipeline to improve redis concurrency performance

In certain scenarios, redis's pipeline mechanism can raise a web service's QPS while lowering CPU usage on the redis-server side. Typical cases include issuing several redis requests at once whose results do not depend on each other, or replacing Lua scripts that index KEYS[i] by array subscript, thereby avoiding the script-access restrictions of a cluster environment. This post measures how the pipeline approach affects performance on both the web-server side and the redis-server side.

Test environment and tools

  1. web-server: mac, golang gin framework
  2. redis-server: centos6.5, redis 3.2.12 standalone
  3. load generators: wrk, go-hey
  4. cpu monitoring: sar, ksar

Test code
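The handlers below rely on a shared redigo connection pool and a small Resp response struct, neither of which appears in the snippets. The following setup is an assumed sketch only: the import path, pool sizes, redis address and route wiring are placeholders inferred from the benchmark commands, not code from the original test.

package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
    redigo "github.com/gomodule/redigo/redis"
)

// Resp is the JSON body returned by the handlers (assumed shape).
type Resp struct {
    Data []string `json:"data"`
}

// pool is the shared redigo connection pool; address and sizes are placeholders.
var pool = &redigo.Pool{
    MaxIdle:   30,
    MaxActive: 100,
    Dial: func() (redigo.Conn, error) {
        return redigo.Dial("tcp", "127.0.0.1:6379")
    },
}

func main() {
    r := gin.Default()
    r.GET("/do1", Do1)
    r.GET("/do2", Do2)
    r.Run(":8888") // matches the port used by wrk and hey below
}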

Multiple synchronous requests

// Do1 issues eight HGETs one at a time: each conn.Do call is a
// separate request/response round trip to redis-server.
func Do1(c *gin.Context) {
    conn := pool.Get()
    defer conn.Close()
    v1, _ := redigo.String(conn.Do("HGET", "foo1", "val"))
    v2, _ := redigo.String(conn.Do("HGET", "foo2", "val"))
    v3, _ := redigo.String(conn.Do("HGET", "foo3", "val"))
    v4, _ := redigo.String(conn.Do("HGET", "foo4", "val"))
    v5, _ := redigo.String(conn.Do("HGET", "foo5", "val"))
    v6, _ := redigo.String(conn.Do("HGET", "foo6", "val"))
    v7, _ := redigo.String(conn.Do("HGET", "foo7", "val"))
    v8, _ := redigo.String(conn.Do("HGET", "foo8", "val"))
    resp := Resp{}
    resp.Data = []string{v1, v2, v3, v4, v5, v6, v7, v8}
    c.JSON(http.StatusOK, resp)
}

Pipeline request

// Do2 pipelines the same eight HGETs: Send only buffers each command
// on the client side, and conn.Do("") flushes the buffer and collects
// all pending replies in a single round trip (redigo treats an empty
// command name as "flush and receive everything outstanding").
func Do2(c *gin.Context) {
    conn := pool.Get()
    defer conn.Close()
    conn.Send("HGET", "foo1", "val")
    conn.Send("HGET", "foo2", "val")
    conn.Send("HGET", "foo3", "val")
    conn.Send("HGET", "foo4", "val")
    conn.Send("HGET", "foo5", "val")
    conn.Send("HGET", "foo6", "val")
    conn.Send("HGET", "foo7", "val")
    conn.Send("HGET", "foo8", "val")
    resp := Resp{}
    resp.Data = make([]string, 0)
    replies, _ := redigo.Values(conn.Do(""))
    for _, v := range replies {
        vv, _ := redigo.String(v, nil)
        resp.Data = append(resp.Data, vv)
    }
    c.JSON(http.StatusOK, resp)
}
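For comparison, the same pipeline can be written with redigo's explicit Send/Flush/Receive calls instead of the empty-command Do("") shortcut. This variant was not benchmarked here; it is only a sketch of the equivalent flow, with error handling skipped to match the handlers above.

// Do2Explicit is an assumed alternative to Do2 using Flush/Receive.
func Do2Explicit(c *gin.Context) {
    conn := pool.Get()
    defer conn.Close()
    keys := []string{"foo1", "foo2", "foo3", "foo4", "foo5", "foo6", "foo7", "foo8"}
    for _, k := range keys {
        conn.Send("HGET", k, "val") // buffered locally, nothing on the wire yet
    }
    conn.Flush() // write all buffered commands in one batch
    resp := Resp{Data: make([]string, 0, len(keys))}
    for range keys {
        v, _ := redigo.String(conn.Receive()) // replies come back in send order
        resp.Data = append(resp.Data, v)
    }
    c.JSON(http.StatusOK, resp)
}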

Performance on the web-server side

Multiple synchronous requests

wrk -c 30 -t 2 -d 10s 'http://127.0.0.1:8888/do1'
Running 10s test @ http://127.0.0.1:8888/do1
  2 threads and 30 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.41ms    1.13ms  19.05ms   82.91%
    Req/Sec     2.35k    223.77    2.71k    74.50%
  46845 requests in 10.01s, 7.37MB read
Requests/sec:   4681.74
Transfer/sec:    754.38KB

Pipeline request

wrk -c 30 -t 2 -d 10s 'http://127.0.0.1:8888/do2'
Running 10s test @ http://127.0.0.1:8888/do2
  2 threads and 30 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.07ms  684.99us  16.05ms   76.49%
    Req/Sec     7.29k    500.40    8.18k    81.19%
  146603 requests in 10.10s, 23.07MB read
Requests/sec:  14513.63
Transfer/sec:      2.28MB

Performance on the redis-server side

(Figure: request load)

To compare redis-server CPU under equal load, hey's per-worker rate limit (-q) is tuned so that both endpoints serve roughly the same ~3000 requests/sec, as the summaries below confirm.

Multiple synchronous requests

hey -c 30 -q 900 -z 40s -m GET http://127.0.0.1:8888/do1
Summary:
  Total:        40.0042 secs
  Slowest:      0.2279 secs
  Fastest:      0.0031 secs
  Average:      0.0097 secs
  Requests/sec: 3086.1278

Pipeline request

hey -c 30 -q 100 -z 40s -m GET http://127.0.0.1:8888/do2
Summary:
  Total:        40.0026 secs
  Slowest:      0.0268 secs
  Fastest:      0.0004 secs
  Average:      0.0035 secs
  Requests/sec: 2996.2303

CPU load

(Figure: CPU load curve)
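For reference, the CPU samples behind a curve like this can be collected with sar and plotted with ksar. The exact invocation used here is not shown in the post; something along these lines (flags are an assumption) samples overall CPU utilization once per second during a load run:

sar -u 1 60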

Conclusion

In this test each request fetched data from redis-server 8 times. The pipelined handler reached about 3x the web QPS of the non-pipelined one, and when both were held at the same QPS, the pipelined handler put roughly half as much CPU load on redis-server. Pipelining therefore offers one practical way to raise redis concurrency. At the same time, keep the amount of data in a single pipeline bounded, so that one long pipeline does not hold the connection and block other redis requests.
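One way to keep a single pipeline from growing too large is to split the keys into fixed-size batches and pipeline each batch separately. The helper below is only an illustrative sketch; its name, signature and batching scheme are assumptions, not part of the test code above.

// hgetChunked pipelines HGETs in batches of at most batchSize commands,
// so no single pipeline monopolizes the connection for too long.
// Assumed helper; not part of the benchmark above.
func hgetChunked(conn redigo.Conn, keys []string, batchSize int) ([]string, error) {
    out := make([]string, 0, len(keys))
    for start := 0; start < len(keys); start += batchSize {
        end := start + batchSize
        if end > len(keys) {
            end = len(keys)
        }
        // Buffer one batch of commands.
        for _, k := range keys[start:end] {
            conn.Send("HGET", k, "val")
        }
        // Flush the batch and read its replies before starting the next one.
        replies, err := redigo.Values(conn.Do(""))
        if err != nil {
            return nil, err
        }
        for _, r := range replies {
            v, err := redigo.String(r, nil)
            if err != nil && err != redigo.ErrNil {
                return nil, err
            }
            out = append(out, v)
        }
    }
    return out, nil
}

With batchSize set to 8 this behaves like Do2 above; smaller batches trade a few extra round trips for shorter per-batch occupancy of the connection.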

------ End of post ------