Why do we need a Connection Pooler?
Saturating db connections is a recurring problem: when you need more connections than are available, you can raise the allowed number of db connections (keeping database constraints in mind), but performance issues will still persist if db connection creation/termination time is significant.
PostgreSQL error: FATAL: sorry, too many clients already
Generally, creating and closing db connections is expensive in terms of resources and time. E.g., PostgreSQL forks an entirely new backend process for every new connection, which adds to the overall transaction time. The better option is to avoid this unnecessary expense by reusing already-established db connections, and hence enhance performance. This is exactly what a connection pooler does.
Those who cannot remember the past are condemned to repeat it – Dynamic Programming.
What is a Connection Pooler?
A connection pooler is an intermediate tool, mainly responsible for managing db connections, that provides an abstracted interface to the web servers. It maintains a pool of db connections for future use, helping you avoid the cost of creating db connections: the connections are kept dormant and can be used as and when required. As a result, a pooler allows you to serve more db requests/queries than the allowed number of connections, and in less time too.
How does it work?
The architecture is pretty straightforward. The connection pooler sits between the web server and the database and speeds up transactions. Only a few configuration values need to change at the server level, e.g. the db port.
The web server can perform transactions on the db without having to create or drop any db connection explicitly, avoiding that extra overhead. In this sense the connection pooler works much like a reverse proxy, taking most of the connection overhead away from the database. The pooler maintains a set of live db connections, and spins up new ones only if all existing ones are exhausted, or to maintain a desired count (min_pool_size & reserve_pool_size) as per the specified configuration.
Secondly, the web server picks an existing db connection from the pool and performs the transaction; once the transaction completes, the connection returns to the pool. Pgbouncer's internal bookkeeping consumes some additional capacity, but trading that off against the expense of db connection creation is usually worthwhile.
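The pick/return cycle described above can be sketched as a minimal, generic pool. This is a simplified illustration of the idea, not Pgbouncer's actual implementation; the `factory` callable here is a stand-in for a real db connect function:

```python
import queue

class ConnectionPool:
    """Minimal pool sketch: hand out idle connections, create new ones lazily."""
    def __init__(self, factory, max_size=5):
        self._factory = factory          # callable that creates a new "connection"
        self._max_size = max_size
        self._idle = queue.Queue()       # connections waiting to be reused
        self._created = 0

    def acquire(self):
        try:
            return self._idle.get_nowait()   # reuse an existing connection
        except queue.Empty:
            if self._created < self._max_size:
                self._created += 1
                return self._factory()       # spin up a new one only when needed
            return self._idle.get()          # pool exhausted: wait for a release

    def release(self, conn):
        self._idle.put(conn)                 # return the connection to the pool

# Demo with a dummy factory standing in for a real db connect()
pool = ConnectionPool(factory=lambda: object(), max_size=2)
c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()
print(c1 is c2)  # True: the connection was reused, not recreated
```

Only one connection is ever created in this demo, because the second `acquire()` finds the released connection waiting in the pool.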
Note: a connection pooler doesn't make db queries execute faster; it only shortens the path to the database, i.e. queries hit the db sooner. Query execution time remains the same with or without a pooler. Query optimisation is an entirely different story.
Pgbouncer is a stable, production-grade connection pooler for PostgreSQL. PostgreSQL doesn't even realise the presence of Pgbouncer. Pgbouncer can do load handling, connection scheduling/routing and load balancing.
It provides three pooling methods (session pooling being the default), described below:
1. Session Pooling :- The client holds a server connection for the entire duration it stays connected; the connection returns to the pool only once the client disconnects.
2. Transaction Pooling :- The client gets a connection only for the duration of a transaction. The connection is taken back once the transaction completes, so session-based features of PostgreSQL can't be used.
3. Statement Pooling :- Transaction pooling with multi-statement transactions disallowed. This enforces “autocommit” mode on the client and is mostly targeted at PL/Proxy.
Check this for compatible PostgreSQL features for different Pooling methods.
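For example, switching to transaction pooling is a single setting in /etc/pgbouncer/pgbouncer.ini (shown here as an illustration; pick the mode that matches the PostgreSQL features your application relies on):

```ini
[pgbouncer]
; one of: session (default), transaction, statement
pool_mode = transaction
```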
Pgbouncer with Odoo
This blog shows how you can integrate Pgbouncer (for PostgreSQL) with Odoo.
This can help you set up Odoo if you haven't yet.
How to Set Up Pgbouncer with Odoo: –
- Install pgbouncer.
sudo apt install pgbouncer
- Edit the pgbouncer configuration file /etc/pgbouncer/pgbouncer.ini under the databases section.
Add * = host=localhost port=5432 to map all databases.
You can also map databases individually, e.g. db1 = host=localhost port=5432 dbname=db1
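Putting the two mapping styles together, the databases section might look like this (hostname and port are the local defaults assumed throughout this guide):

```ini
[databases]
; map every database name through to the local PostgreSQL server
* = host=localhost port=5432
; or map a single database explicitly
db1 = host=localhost port=5432 dbname=db1
```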
- Pgbouncer needs db user credentials. Run the query below in psql to save the credentials in a file readable by the pgbouncer user.
COPY (
  SELECT '"' || rolname || '" "' || CASE WHEN rolpassword IS NULL THEN '' ELSE rolpassword END || '"'
  FROM pg_authid
) TO '/etc/pgbouncer/userlist.txt';
Note :- Ensure that you update this file every time a new db user is created.
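The resulting /etc/pgbouncer/userlist.txt holds one quoted user/password pair per line; with md5 authentication the stored value is typically an md5 hash. The entries below are made-up examples of the expected shape:

```text
"odoo" "md5a3556571e93b0d20722ba62be61e8c2d"
"readonly" ""
```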
- Set the listen_port and listen_address accordingly in pgbouncer.ini.
- Set the auth_type to md5. With md5, Pgbouncer authenticates clients against the file generated in step 3; with trust it doesn't. If you set auth_type to trust, Pgbouncer will grant access to databases without any authentication.
- Change the db_port in the Odoo config to the listen_port set in step 4.
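Assuming Pgbouncer listens on port 6432 (a common choice), the relevant lines of the Odoo config file would look like this. All values here are illustrative; use your own user, password and config path:

```ini
[options]
db_host = localhost
db_port = 6432
db_user = odoo
db_password = odoo_password
```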
- Restart Pgbouncer & then Odoo.
Now the db connections route via Pgbouncer.
Note: the db_password should not be False in the Odoo config if auth_type is md5.
Pgbouncer offers fine-grained control over its operation. E.g., you can set the minimum and maximum pool size, cap the total connections to a database (max_db_connections) and the connections any single user can own (max_user_connections). Some precautionary controls like server_reset_query are also available.
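A sketch of these knobs in pgbouncer.ini; the numbers are illustrative placeholders, not recommendations, and should be sized against your database's max_connections:

```ini
[pgbouncer]
min_pool_size = 5
default_pool_size = 20
reserve_pool_size = 5
max_db_connections = 50
max_user_connections = 50
server_reset_query = DISCARD ALL
```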
Check this for more such options.
Don't forget to check the log file /var/log/postgresql/pgbouncer.log.
Use pgbench to load test and compare performance. Pgbench reports latency, average execution time and so on. Load test the db with and without pgbouncer to see the actual difference. In the small load test below, run locally, average latency dropped to one-tenth with pgbouncer. Note that only a simple select query was used; for more complex queries the results will differ. The difference is more significant for a large number of small/light queries; for larger/heavy queries the gap may not be that huge. Query optimisation is still recommended.
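A minimal pgbench comparison might look like the commands below (assuming a database named bench, PostgreSQL on port 5432 and Pgbouncer on port 6432; the -S flag restricts the run to select-only transactions, matching the test described above):

```
# initialise the pgbench tables (run once)
pgbench -i -h localhost -p 5432 bench

# 10 clients, 1000 transactions each, directly against PostgreSQL
pgbench -h localhost -p 5432 -c 10 -t 1000 -S bench

# the same load routed via Pgbouncer
pgbench -h localhost -p 6432 -c 10 -t 1000 -S bench
```

Compare the "latency average" lines of the two runs.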
DB Queries directly on PostgreSQL
DB Queries via Pgbouncer
Connection poolers help when incoming db requests tend to exceed the database's connection limit: the pooler absorbs the load instead, and the database is far less likely to freeze. For this to work, the pooler and database configurations should be kept consistent with each other. The database remains unaware of the heavy traffic. Some databases ship with some sort of built-in pooler. Connection poolers don't always make sense, though: avoid them if you have few incoming db requests, and weigh the resources a pooler consumes against the resources it saves you. The performance difference can be vital when multiple clients are making multiple db requests.
The configuration described above is pretty simple. Pgbouncer can further be integrated with HAProxy to offer high availability, and cascaded Pgbouncers are also possible if required. Read this to see how Pgbouncer manages 20,000 tps on one node, making it an integral part of Russia's largest classified site. That post describes different Pgbouncer configurations too.
Check official documentation for more details.
Try poolers if you have not.
First ever!! Thank you for reading this!! Hope that helps.
For any kind of query or suggestion, you can reach us at firstname.lastname@example.org or just raise a ticket at our HelpDesk system.