I have a basic pgbouncer configuration set up on an Amazon EC2 instance.
My client code (an AWS Lambda function, or a localhost webserver when developing) is making SQL queries to my database through the pgbouncer.
Currently, each query is taking 150-200ms to execute, with about 80% of that being the time it takes to get the connection.
Here's how I'm getting a connection:
long start = System.currentTimeMillis();
Connection conn = DriverManager.getConnection(this.url, this.username, this.password);
log.info("Got connection in " + (System.currentTimeMillis() - start) + "ms");
this.url is simply the location of the pgbouncer instance. Here's what the measured latency looks like: "Got connection" is from the above code snippet, and "Executed in" is a separate timing that measures the elapsed duration after a PreparedStatement has been executed. The first connection is usually a bit slow, which is fine; subsequent ones take around 100ms pretty consistently.
DBManager - Got connection in 190ms
DBManager - Executed in 232ms
DBManager - Got connection in 108ms
DBManager - Executed in 132ms
DBManager - Got connection in 108ms
DBManager - Executed in 128ms
Is there any way to make this faster, or am I basically stuck with a minimum ~100ms latency on my requests? I get similar speeds from Lambda and from localhost. Unfortunately, I can't put my Lambda into the same VPC as the database, because of the occasional 8-10 second cold-start delay caused by setting up a new Elastic Network Interface when a Lambda runs in a VPC.
This is my first time working with this kind of setup, so I don't really know where to start. Could I squeeze out more speed by adding power (RAM/CPU) to the database or to the pgbouncer instance? Or should I stop opening a new connection for every request (though that would mean keeping a connection pool per Lambda container on top of the pgbouncer pool)?
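To make that last option concrete, here's roughly what I imagine client-side pooling would look like. This is just a toy sketch I put together, not production code: SimplePool is a made-up class, and I assume a real setup would use an existing library like HikariCP instead (which also handles validation, timeouts, and eviction).

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Rough sketch of a client-side pool: create connections once, up front,
// then hand them out per request instead of dialing pgbouncer every time.
class SimplePool<T> {
    private final BlockingQueue<T> idle;

    SimplePool(Supplier<T> factory, int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // pay the setup latency here, once
        }
    }

    // Returns an idle object immediately, or null if all are checked out.
    T borrow() {
        return idle.poll();
    }

    // Hand the object back so a later request can reuse it.
    void release(T obj) {
        idle.offer(obj);
    }
}
```

With the factory being something like () -> DriverManager.getConnection(url, user, pass), each request's "get connection" step would become a queue poll instead of a fresh TCP handshake plus authentication round trip. But each warm Lambda container would then hold its own set of open connections in addition to whatever pgbouncer is pooling, which is exactly the trade-off I'm unsure about.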
I feel like this is surely a pretty common problem so there must be some good ways of solving it, but I haven't been able to find anything.