Control vs. Automation: The Great PostgreSQL Autotune Debate
How much control should we hand over to automated tools? That was the central question of a debate on PostgreSQL autotuning at PGDay Lowlands 2025.
On one side of the stage were the proponents of automation. On the other, the advocates for control and human expertise, including our own Guy Gyles, Senior Database Architect and Partner here at Zebanza. The discussion that followed was a fascinating look into the future of database management.
Why Shouldn’t We Automate Databases Completely?
The debate kicked off with the opposition laying out their core concerns. The arguments weren’t against technology itself, but against the premature or absolute trust in automated systems. One of the first points raised was startlingly simple: AI, like ChatGPT, can be confidently wrong. For a database, which is supposed to be a source of truth, this is a fundamental problem. If an AI “tunes” a query in a way that changes the result, it undermines the very purpose of the database.
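To make that concern concrete: before even discussing the performance of a suggested rewrite, the cheapest safeguard is to check that it still returns the same answer. The sketch below is purely illustrative; the connection string, table, and queries are hypothetical placeholders, not anything from the debate.

```python
# Illustrative only: verify that a suggested rewrite returns the same rows
# as the original query before even discussing its speed.
# The DSN, table, and queries below are hypothetical placeholders.
import psycopg2

original  = "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id"
rewritten = "SELECT customer_id, SUM(amount) FROM orders GROUP BY 1"  # suggested rewrite

with psycopg2.connect("dbname=shop") as conn, conn.cursor() as cur:
    cur.execute(original)
    baseline = sorted(cur.fetchall())
    cur.execute(rewritten)
    candidate = sorted(cur.fetchall())

# A faster query that changes the answer is not an optimization.
if baseline != candidate:
    raise RuntimeError("Rewrite changed the result set -- reject it")
```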
The second major point hit close to home for anyone who manages production systems: end-users value stable and predictable performance above all else. A small percentage improvement on Tuesday is worthless if it comes with unexpected slowdowns on Monday and Wednesday. Predictability is a feature, not a bug.
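Predictability can even be put into numbers. In the sketch below the latency figures are made up, but they show the pattern the opposition was warning about: a strategy that wins on the weekly average can still lose where it matters, on the worst day a user actually experiences.

```python
# Hypothetical daily p95 latencies (ms) for two tuning approaches over one week.
# The second is better on average but far less predictable.
from statistics import mean, pstdev

manual_tuning = [120, 118, 121, 119, 120]   # boring, and that's the point
autotuned     = [80, 175, 78, 170, 79]      # lower mean, wild swings

for name, latencies in (("manual", manual_tuning), ("autotuned", autotuned)):
    print(f"{name:9s} mean={mean(latencies):.0f} ms  "
          f"stddev={pstdev(latencies):.0f} ms  worst day={max(latencies)} ms")
# Users remember the worst day, not the weekly average.
```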
This is where Guy stepped in to build on the argument. For him, the issue boils down to two things: control and context.
“I still want to have control… most of the time the software doesn’t know everything about your system.”
He argued that an autotuning tool has a narrow view. It sees the database, but it doesn’t see the whole picture—the operating system, the network, the storage, or the new application release that the development team is about to push. This lack of holistic system awareness can lead to tuning decisions that are locally optimal but globally disastrous.
And, as Guy candidly put it, “I don’t like surprises. And neither do users.” The goal should be proactive collaboration with developers, testing new workloads before they hit production, not reacting to a performance issue that an algorithm has to “learn” its way out of over several days.
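What checking that wider context might look like is sketched below. The recommended value, the concurrency estimate, and the thresholds are invented for illustration; the point is only that the host's view has to be consulted before the database-only view is trusted.

```python
# Illustrative gatekeeper: an autotuner that only sees the database might raise
# work_mem while the host is already short on memory. All numbers here are
# invented; the point is that the OS view has to be part of the decision.

def available_memory_kb(path="/proc/meminfo"):
    """Read MemAvailable from /proc/meminfo (Linux only)."""
    with open(path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1])
    raise RuntimeError("MemAvailable not found")

recommended_work_mem_kb = 256 * 1024   # hypothetical suggestion: work_mem = 256MB
expected_concurrent_sorts = 50         # rough estimate from talking to the app team

if recommended_work_mem_kb * expected_concurrent_sorts > available_memory_kb():
    print("Reject: locally optimal for the database, globally risky for the host")
else:
    print("Plausible -- but test it against the next release's workload first")
```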
The Case for Automation in Databases
The proponents of autotune offered a compelling counter-narrative. They asked us to think about other technological leaps, like Google’s autonomous driving project. A decade ago, it seemed impossible. Today, it’s a reality in several cities and is fundamentally safer than human driving. The argument is that database tuning is a similarly complex problem that can be solved with enough hard work and wisdom.
They clarified that they aren’t talking about a general AI that can write poetry. They’re talking about a specialized system that speaks one language fluently: PostgreSQL. The goal of such a system is to optimize for specific Key Performance Indicators (KPIs) that are defined by humans.
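To give a feel for what "human-defined KPIs" could mean in practice, here is a minimal, hypothetical sketch. None of these names or thresholds come from an actual autotuning product; the idea is simply that humans set the targets and the hard limits, and the tool may only optimize inside them.

```python
# Hypothetical sketch of KPIs a team might hand to a specialized tuner:
# humans define the targets and hard limits, the tool optimizes within them.
from dataclasses import dataclass

@dataclass
class TuningKPIs:
    target_p95_latency_ms: float   # what "better" means
    max_replication_lag_s: float   # a limit no latency win may violate
    max_memory_gb: float           # resource ceiling the tool must respect

kpis = TuningKPIs(target_p95_latency_ms=150.0, max_replication_lag_s=5.0, max_memory_gb=32.0)

def accept_change(p95_ms: float, lag_s: float, mem_gb: float) -> bool:
    """A proposed change is acceptable only if it meets the target without breaking limits."""
    return (p95_ms <= kpis.target_p95_latency_ms
            and lag_s <= kpis.max_replication_lag_s
            and mem_gb <= kpis.max_memory_gb)

print(accept_change(p95_ms=140.0, lag_s=2.0, mem_gb=28.0))  # True: within every boundary
```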
Furthermore, they pointed out that PostgreSQL already has elements of autotuning. Over the years, some manual parameters have been removed because better, automated algorithms were found (the old checkpoint_segments knob, for example, gave way to max_wal_size, and autovacuum took over what used to be hand-scheduled maintenance). They see AI as the next logical step in this evolution, a way to create repeatable practices and free up brilliant human minds from the repetitive task of tuning thousands of databases. Instead of fearing the machine, we should see it as a powerful tool that helps us delegate tasks, just as we already delegate query execution plans to the optimizer.
Where Do We Go From Here?
As the debate progressed, a consensus began to emerge. The issue isn’t a simple binary choice between “man” and “machine.” The autonomous car, as Guy pointed out, is designed for safety and efficiency, not to win a Formula 1 race. The context determines the right tool for the job.
The final agreement was that the path forward lies in nuance. There are absolutely things in 2025 that can and should be automated. But there are other, more complex challenges that still require deep expertise and a complete understanding of the business and technical environment.
For us at Zebanza, this debate reinforces a core belief that drives our entire approach. We see the value in automation, but we know it’s not a silver bullet. Just like a zebra’s stripes, every client’s setup is unique.
A generic algorithm can’t see that, but a dedicated partner can. The future isn’t about replacing the expert, but rather about empowering them with better tools to deliver the predictable, stable, and high-performing systems our clients depend on.

Guy Gyles

