Tomas Vondra

blog about Postgres code and community

How are committers selected?

At a couple of recent conferences, I got to describe the process Postgres uses to select new committers/maintainers. Usually I was explaining it to users and developers using Postgres, but in some cases the process was unclear even to experienced Postgres contributors. The official docs are rather brief, and don’t explain various important details. Let me explain how I understand the informal process, who is responsible for what, and so on. This post is not meant to give you advice on how to become a committer; that’s a far more subjective question. Perhaps in some future post, I’m not sure yet.

The real cost of random I/O

The random_page_cost parameter was introduced ~25 years ago, and it has defaulted to 4.0 from the very beginning. Storage has changed a lot since then, and so has the Postgres code, so the default likely no longer matches reality. But what value should you use instead? Flash storage is much better at handling random I/O, so maybe you should reduce the default? Some places go as far as recommending setting it to 1.0, the same as seq_page_cost. Is this intuition right?
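For reference, this is how you would inspect and change the parameter in question. The 1.0 value below is purely illustrative, echoing the recommendation the post questions, not an endorsement:

```sql
-- Check the current value (4.0 is the long-standing default).
SHOW random_page_cost;

-- Illustrative only: lower the cost for a cluster on flash storage,
-- matching seq_page_cost. Whether 1.0 is actually appropriate is
-- exactly what the post examines.
ALTER SYSTEM SET random_page_cost = 1.0;
SELECT pg_reload_conf();
```

The parameter can also be set per tablespace (ALTER TABLESPACE ... SET), which is useful when fast and slow storage coexist in one cluster.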

The AI inversion

If you attended FOSDEM 2026, you probably noticed discussions on how AI impacts FOSS, mostly in detrimental ways. Two of the three keynotes in Janson mentioned this, and I assume other speakers mentioned the topic too. Moreover, it was a very popular topic in the “hallway track.” I myself chatted about it with multiple people, both from the Postgres community and outside of it. And the experience does not seem great …

Stabilizing Benchmarks

I run a fair number of benchmarks as part of development, both on my own patches and while reviewing patches by others. That often requires dealing with noise, particularly for small optimizations. Here’s an overview of the techniques I use to filter out random variation (noise). Most of the time it’s easy - the benefits are large and obvious. Great! But sometimes we need to care about cases where the changes are small (think less than 5%).

Don't give Postgres too much memory (even on busy systems)

A couple of weeks ago I posted about how setting maintenance_work_mem too high may make things slower. That can be surprising, as the intuition is that more memory makes things faster. I got an e-mail about that post, asking if the conclusion would change on a busy system. That’s a really good question, so let’s look at it. To paraphrase the message I got, it went something like this: lower maintenance_work_mem values may split the task into chunks that fit into the CPU cache, which may end up being faster than working with larger chunks.
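For context, maintenance_work_mem is an ordinary session-settable parameter, so it’s easy to experiment with per session. The values below are illustrative, not recommendations:

```sql
-- Check the current value (the default is 64MB).
SHOW maintenance_work_mem;

-- Illustrative only: raise the limit for one session before a large
-- index build. The post argues that bigger is not always faster,
-- especially on a busy system.
SET maintenance_work_mem = '4GB';
CREATE INDEX CONCURRENTLY idx_example ON some_table (some_column);
```

(idx_example, some_table, and some_column are placeholder names for the sake of the example.)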