PostgreSQL Parallel Query: TPC-H Benchmark and Optimization Analysis

Original author: digoal (德哥)    Written: 2016-11-14 23:57:14+08
Published by doudou586 on 2016-11-14 23:57:14+08



Author: digoal

Date: 2016-11-14

Background

PostgreSQL 9.6 is the first release to support parallel execution, covering aggregation, full-table (sequential) scans, hash joins, and nested loop joins.

https://www.postgresql.org/docs/9.6/static/release-9-6.html

Parallel queries (Robert Haas, Amit Kapila, David Rowley, many others)   

With 9.6, PostgreSQL introduces initial support for parallel execution of large queries.   

Only strictly read-only queries where the driving table is accessed via a sequential scan can be parallelized.   

Hash joins and nested loops can be performed in parallel, as can aggregation (for supported aggregates).   

Much remains to be done, but this is already a useful set of features.   

Parallel query execution is not (yet) enabled by default. To allow it, set the new configuration parameter max_parallel_workers_per_gather to a value larger than zero.   

Additional control over use of parallelism is available through other new configuration parameters force_parallel_mode,
parallel_setup_cost, parallel_tuple_cost, and min_parallel_relation_size.   

Provide infrastructure for marking the parallel-safety status of functions (Robert Haas, Amit Kapila)   
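As the release notes above say, parallel query is off until max_parallel_workers_per_gather is raised above zero. A minimal sketch of enabling it for a session, using the parameters listed above (the cost-related values shown are simply the 9.6 defaults, for illustration only):

    -- Parallel query is disabled unless max_parallel_workers_per_gather > 0.
    SET max_parallel_workers_per_gather = 4;
    -- The remaining knobs tune when the planner chooses a parallel plan
    -- (the values shown here are the 9.6 defaults):
    SET parallel_setup_cost = 1000;            -- planner cost charged for starting workers
    SET parallel_tuple_cost = 0.1;             -- cost per tuple passed from worker to leader
    SET min_parallel_relation_size = '8MB';    -- smallest relation eligible for a parallel scan
    SET force_parallel_mode = off;             -- 'on' only forces parallel plans for testing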

So how much does this improve TPC-H performance?

Robert Haas's notes on the PostgreSQL 9.6 TPC-H test

With the degree of parallelism set to 4, 17 of the 22 queries used a parallel execution plan.

Of those, 15 ran faster than single-process execution (11 of them at least 2x faster), 1 was unchanged, and 1 became slower.

I decided to try out parallel query, as implemented in PostgreSQL 9.6devel, on the TPC-H queries.

To do this, I followed the directions at https://github.com/tvondra/pg_tpch - thanks to Tomas Vondra for those instructions.

I did the test on an IBM POWER7 server provided to the PostgreSQL community by IBM.

I scaled the database to use 10GB of input data; the resulting database size was 22GB, of which 8GB was indexes.

I tried out each query just once without really tuning the database at all, except for increasing shared_buffers to 8GB.

Then I tested them again after enabling parallel query by configuring max_parallel_degree = 4.
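The tuning Robert describes maps roughly to the sketch below (an assumption on my part: he may simply have edited postgresql.conf by hand). Note that max_parallel_degree was the parameter's name in 9.6devel; the released 9.6 renamed it max_parallel_workers_per_gather.

    -- Sketch of the test configuration described above, as ALTER SYSTEM commands.
    ALTER SYSTEM SET shared_buffers = '8GB';               -- takes effect only after a restart
    -- 'max_parallel_degree' in 9.6devel became the following in the released 9.6:
    ALTER SYSTEM SET max_parallel_workers_per_gather = 4;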

Of the 22 queries, 17 switched to a parallel plan, while the plans for the other 5 were unchanged.

Of the 17 queries where the plan changed, 15 got faster, 1 ran at the same speed, and 1 got slower.

11 of the queries ran at least twice as fast with parallelism as they did without parallelism.

Here are the comparative results for the queries where the plan changed (single-process time → parallel time):

Q1: 229 seconds → 45 seconds (5.0x)
Q3: 45 seconds → 17 seconds (2.6x)
Q4: 12 seconds → 3 seconds (4.0x)
Q5: 38 seconds → 17 seconds (2.2x)
Q6: 17 seconds → 6 seconds (2.8x)
Q7: 41 seconds → 12 seconds (3.4x)
Q8: 10 seconds → 4 seconds (2.5x)
Q9: 81 seconds → 61 seconds (1.3x)
Q10: 37 seconds → 18 seconds (2.0x)
Q12: 34 seconds → 7 seconds (4.8x)
Q15: 33 seconds → 24 seconds (1.3x)
Q16: 17 seconds → 16 seconds (1.0x)
Q17: 140 seconds → 55 seconds (2.5x)
Q19: 2 seconds → 1 second (2.0x)
Q20: 70 seconds → 70 seconds (1.0x)
Q21: 80 seconds → 99 seconds (0.8x)
Q22: 4 seconds → 3 seconds (1.3x)

Linear scaling with a leader process and 4 workers would mean a 5.0x speedup, which we achieved in only one case.

However, for many users, that won't matter: if you have CPUs that would otherwise be sitting idle, it's better to get some speedup than no speedup at all.

Of course, I couldn't resist analyzing what went wrong here, especially for Q21, which actually got slower.

Q21 became slower for two reasons: the work_mem setting, and the way parallel hash joins currently work.

To some degree, that's down to misconfiguration:

I ran this test with the default value of work_mem=4MB, but Q21 chooses a plan that builds a hash table on the largest table in the database, which is about 9.5GB in this test.

Therefore, it ends up doing a 1024-batch hash join, which is somewhat painful under the best of circumstances.

With work_mem=1GB, the regression disappears, and it's 6% faster with parallel query than without.
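The fix described above is simply to raise work_mem for the session and re-check the hash join's batch count in EXPLAIN ANALYZE. A self-contained sketch on hypothetical tables (not the TPC-H schema; the batch counts are indicative only):

    -- Hypothetical data just to show the effect; Q21 itself hashes a ~9.5GB table.
    CREATE TABLE demo_fact (id int, payload text);
    CREATE TABLE demo_dim  (id int, payload text);
    INSERT INTO demo_fact SELECT g, md5(g::text) FROM generate_series(1, 2000000) g;
    INSERT INTO demo_dim  SELECT g, md5(g::text) FROM generate_series(1, 2000000) g;
    ANALYZE demo_fact; ANALYZE demo_dim;

    SET work_mem = '4MB';
    EXPLAIN (ANALYZE) SELECT count(*) FROM demo_fact f JOIN demo_dim d USING (id);
    -- The Hash node reports something like "Batches: 64": the inner table is hashed
    -- in pieces, spilling to disk between batches.

    SET work_mem = '1GB';
    EXPLAIN (ANALYZE) SELECT count(*) FROM demo_fact f JOIN demo_dim d USING (id);
    -- With enough work_mem the same Hash reports "Batches: 1" and nothing spills.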

At present, every parallel worker in a hash join needs its own copy of the hash table; if the hash is built on a large table, the CPU and memory cost of hashing that table is multiplied N times.

Hashing the smaller table mitigates this problem.

However, there's a deeper problem, which is that while PostgreSQL 9.6 can perform a hash join in parallel, each process must build its own copy of the hash table.

That means we use N times the CPU and N times the memory, and we may induce I/O contention, locking contention, or memory pressure as well.

It would be better to have the ability to build a shared hash table, and EnterpriseDB is working on that as a feature, but it won't be ready in time for PostgreSQL 9.6, which is already in feature freeze.

Since Q21 needs a giant hash table, this limitation really stings.
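Schematically, a 9.6 parallel hash join has the shape sketched below (illustrative only, reusing the hypothetical tables from the work_mem sketch above, not a plan captured from this benchmark). Because the Hash node sits below the Gather, the leader and each worker build their own full copy of the hash table:

    SET max_parallel_workers_per_gather = 4;
    EXPLAIN SELECT count(*) FROM demo_fact f JOIN demo_dim d USING (id);
    -- A typical 9.6 parallel shape:
    --   Finalize Aggregate
    --     ->  Gather  (Workers Planned: 4)
    --           ->  Partial Aggregate
    --                 ->  Hash Join
    --                       Hash Cond: (f.id = d.id)
    --                       ->  Parallel Seq Scan on demo_fact f
    --                       ->  Hash
    --                             ->  Seq Scan on demo_dim d     -- built once per process
    -- With a 9.5GB inner table and 4 workers plus the leader, that is roughly five
    -- copies of the hash table's CPU and memory cost.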

One improvement for hash joins is to use a shared hash table instead of having every worker process build its own copy.

This will probably not land until PostgreSQL 10.0.

In fact, there are a number of queries here where it seems like building a shared hash table would speed things up significantly: Q3, Q5, Q7, Q8, and Q21.

An even more widespread problem is that, at present, the driving table for a parallel query must be accessed via a parallel sequential scan; that's the only operation we have that can partition the input data.

Another area for improvement is bitmap scans: several queries are bottlenecked where a bitmap scan would help, but parallel query does not yet support bitmap scans.

Many of these queries - Q4, Q5, Q6, Q7, Q14, Q15, and Q20 - would have been better off using a bitmap index scan on the driving table, but unfortunately that's not supported in PostgreSQL 9.6.

We still come out ahead on these queries in terms of runtime because the system simply substitutes raw power for finesse:

with enough workers, we can scan the whole table quicker than a single process can scan the portion identified as relevant by the index.

However, it would clearly be nice to do better.
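One way to see this "raw power for finesse" substitution on your own data is to compare plans with parallelism disabled and enabled. A hypothetical table and predicate follow; which plan wins depends entirely on statistics and costs:

    -- Hypothetical example: a moderately selective date-range predicate with an index.
    CREATE TABLE demo_orders (id int, odate date, status text);
    INSERT INTO demo_orders
        SELECT g, DATE '2016-01-01' + (g % 365), CASE WHEN g % 10 = 0 THEN 'F' ELSE 'O' END
        FROM generate_series(1, 5000000) g;
    CREATE INDEX ON demo_orders (odate);
    ANALYZE demo_orders;

    SET max_parallel_workers_per_gather = 0;   -- parallelism off
    EXPLAIN SELECT count(*) FROM demo_orders WHERE odate BETWEEN '2016-03-01' AND '2016-03-31';
    -- typically a Bitmap Index Scan feeding a Bitmap Heap Scan

    SET max_parallel_workers_per_gather = 4;   -- parallelism on
    EXPLAIN SELECT count(*) FROM demo_orders WHERE odate BETWEEN '2016-03-01' AND '2016-03-31';
    -- 9.6 may now prefer Gather over a Parallel Seq Scan, because a bitmap scan
    -- cannot drive a parallel plan; extra CPU substitutes for the index's finesse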

Four queries - Q2, Q15, Q16, Q22 - were parallelized either not at all or only to a limited degree due to restrictions related to the handling of subqueries, about which the current implementation of parallel query is not always smart.

Three queries - Q2, Q13, and Q15 - made no or limited use of parallelism because the optimal join strategy is a merge join, which can't be made parallel in a trivial way.

One query - Q17 - managed to perform the same expensive sort twice, once in the workers and then again in the leader.

This is because the Gather operation reads tuples from the workers in an arbitrary and not necessarily predictable order; so even if each worker's stream of tuples is sorted, the way those streams get merged together will probably destroy the sort ordering.
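Schematically, the Q17 problem looks like the fragment below (an illustrative shape only, not the real Q17 plan): a Sort below the Gather runs in every worker, and another Sort above it runs in the leader, because Gather interleaves the workers' streams in arbitrary order.

    --   Sort                            -- leader sorts again: Gather destroyed the order
    --     ->  Gather
    --           ->  Sort                -- each worker already sorted its share
    --                 ->  Parallel Seq Scan on some_table
    -- The same sort work is therefore paid once in the workers and once in the leader.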

There are no doubt other issues here that I haven't found yet, but on the whole I find these results pretty encouraging.

Parallel query basically works, and makes queries that someone thought were representative of real workloads significantly faster.

There's a lot of room for further improvement, but that's likely to be true of the first version of almost any large feature.

Areas where parallel query still needs improvement

Hash joins: use a shared hash table instead of having every worker process build its own copy.

This will probably not land until PostgreSQL 10.0.

Bitmap scans: several queries are bottlenecked where a bitmap scan would help, but parallel query does not yet support bitmap scans. Parallel merge joins are also needed.


