Connection pooling in Scala

In this blog I am going to describe connection pooling in Scala. Before going into the details, I want to explain why and when connection pooling should be used.

Why connection pooling?

For efficient use of database connections, we need to take care of the following two things:
1. Creating a connection is an expensive operation. It does not make sense to create a new connection each and every time database access is required.

2. It is also important to close connections and other database resources after use.

Connection pooling provides a one-stop solution to both of these issues.
In connection pooling, a connection or a set of connection objects is created once (usually at the start of the application) and maintained so that the connections can be reused when future requests to the database arrive. Connection pools are used to enhance the performance of executing commands on a database.
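The core idea can be sketched in a few lines of Scala. This is a toy illustration only, not a real pool implementation: `FakeConnection` and `SimplePool` are hypothetical names standing in for a real `java.sql.Connection` and a production pool.

```scala
import java.util.concurrent.LinkedBlockingQueue

// FakeConnection stands in for a real java.sql.Connection.
class FakeConnection(val id: Int)

// A toy pool: connections are created once, then borrowed and
// returned instead of being recreated per request.
class SimplePool(size: Int) {
  private val idle = new LinkedBlockingQueue[FakeConnection]()
  (1 to size).foreach(i => idle.put(new FakeConnection(i))) // create connections once, up front

  def borrow(): FakeConnection = idle.take()          // blocks if all connections are in use
  def giveBack(c: FakeConnection): Unit = idle.put(c) // "closing" just returns it to the pool
}

val pool = new SimplePool(1)
val first = pool.borrow()
pool.giveBack(first)
val second = pool.borrow()
println(second eq first) // true: the same connection object is reused
```

Real pools such as BoneCP add partitioning, growth between a minimum and maximum size, and leak detection on top of this basic borrow-and-return scheme.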

When to use connection pooling?
If the application accesses the database frequently, connection pooling should be used.

There are many open source connection pooling libraries available; I prefer the BoneCP library. (In this discussion I am using a PostgreSQL database.)

1. Add the BoneCP dependencies in build.sbt:


name := "sbtprojectwithconnectionpooling"

version := "1.0"

scalaVersion := "2.10.3"

organization := "Knoldus"

libraryDependencies ++= Seq(
  "ch.qos.logback" % "logback-classic" % "1.0.13",
  "postgresql" % "postgresql" % "9.1-901.jdbc4",
  "com.jolbox" % "bonecp" % "0.8.0.RELEASE"
)

2. Create the connection pool:

Creating a connection pool requires some configuration parameters:

1. jdbcUrl - the JDBC database connection URL.
Default: none

2. username - the database username.
Default: none

3. password - the database password.
Default: none

4. partitionCount - in order to reduce lock contention and thus improve performance, each incoming connection request picks off a connection from a pool that has thread affinity, i.e. pool[threadId % partition_count]. The higher this number, the better your performance will be for the case when you have plenty of short-lived threads. Beyond a certain threshold, maintenance of these pools will start to have a negative effect on performance (and only for the case when connections on a partition start running out).
Default: 1, minimum: 1, recommended: 3-4 (but very app specific)

5. maxConnectionsPerPartition - the number of connections to create per partition. Setting this to 5 with 3 partitions means you will have 15 unique connections to the database. Note that BoneCP will not create all these connections in one go but rather start off with minConnectionsPerPartition and gradually increase connections as required.

6. minConnectionsPerPartition - the number of connections to start off with per partition.
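The partition arithmetic above can be sketched in plain Scala. The values here are just the example numbers from the parameter descriptions, and `partitionFor` is a hypothetical illustration of the thread-affinity formula, not a BoneCP API call:

```scala
val partitionCount = 3
val maxConnectionsPerPartition = 5

// BoneCP picks a partition with thread affinity: pool(threadId % partitionCount)
def partitionFor(threadId: Long): Int = (threadId % partitionCount).toInt

// 3 partitions of up to 5 connections each can grow to 15 connections in total
val maxTotalConnections = partitionCount * maxConnectionsPerPartition

println(partitionFor(7))     // thread 7 maps to partition 1
println(maxTotalConnections) // 15
```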

For more configuration options, see the BoneCP documentation.


import com.jolbox.bonecp.BoneCP
import com.jolbox.bonecp.BoneCPConfig
import org.slf4j.LoggerFactory
import java.sql.Connection

object ConnectionPool {
  val logger = LoggerFactory.getLogger(this.getClass)

  private val connectionPool: Option[BoneCP] = {
    try {
      Class.forName("org.postgresql.Driver") // load the JDBC driver
      val config = new BoneCPConfig()
      config.setJdbcUrl("jdbc:postgresql://localhost:5432/test") // replace with your database URL
      config.setUsername("postgres")                             // replace with your username
      config.setPassword("postgres")                             // replace with your password
      config.setPartitionCount(3)
      config.setMinConnectionsPerPartition(2)
      config.setMaxConnectionsPerPartition(5)
      config.setCloseConnectionWatch(true) // throw an exception if a connection is not closed
      config.setLogStatementsEnabled(true) // for debugging purposes only
      Some(new BoneCP(config))
    } catch {
      case exception: Exception =>
        logger.warn("Error in creation of connection pool", exception)
        None
    }
  }

  def getConnection: Option[Connection] =
    connectionPool match {
      case Some(connPool) => Some(connPool.getConnection)
      case None           => None
    }
}

3. Take a connection from the connection pool and, after use, return it to the pool by calling the close method on the connection. (Here the close method does not actually close the connection; it returns the connection to the pool.)
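This borrow-and-return discipline can also be factored into a loan-pattern helper, so callers cannot forget to release the connection. The following is a hypothetical sketch: the `Pool` trait, `withConnection`, and `FakePool` are illustrations of the pattern, not part of the BoneCP API.

```scala
// A minimal pool abstraction (hypothetical, not BoneCP).
trait Pool[C] {
  def getConnection: Option[C]
  def close(c: C): Unit // return the connection to the pool
}

// The loan pattern: the connection always goes back to the pool,
// even when the body throws.
def withConnection[C, A](pool: Pool[C])(body: C => A): Option[A] =
  pool.getConnection map { connection =>
    try body(connection)
    finally pool.close(connection) // runs on success and on exception
  }

// A fake in-memory pool to show the flow without a database.
object FakePool extends Pool[String] {
  var returned = 0
  def getConnection: Option[String] = Some("conn-1")
  def close(c: String): Unit = returned += 1
}

val result = withConnection(FakePool)(conn => conn.length)
println(result)            // Some(6)
println(FakePool.returned) // 1: the connection was returned exactly once
```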

import java.sql.ResultSet
import org.slf4j.LoggerFactory

object ConnectionPoolTest extends App {

  val logger = LoggerFactory.getLogger(this.getClass)
  val query = "select * from emp"

  ConnectionPool.getConnection match {
    case Some(connection) =>
      try {
        val statement = connection.createStatement()
        val result = statement.executeQuery(query)
        val scalaResultSet = new MyResultSet(result) map { r => (r.getString(1), r.getString(2), r.getString(3)) }
        scalaResultSet foreach println
      } catch {
        case exception: Exception =>
          logger.warn("Error in executing query", exception)
      } finally {
        if (!connection.isClosed()) connection.close() // returns the connection to the pool
      }
    case None =>
      println("Not getting a connection from the connection pool")
  }
}

class MyResultSet(rs: ResultSet) extends Iterator[ResultSet] {
  def hasNext: Boolean = rs.next()
  def next(): ResultSet = rs
}
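MyResultSet wraps the forward-only JDBC cursor as a Scala Iterator; one caveat is that hasNext advances the cursor, so each row must be consumed exactly once. The same idea can be shown without a database, using a hypothetical in-memory `Cursor` class in place of a JDBC ResultSet:

```scala
// Cursor mimics ResultSet's forward-only behaviour (hypothetical, not JDBC).
class Cursor(rows: List[String]) {
  private var remaining = rows
  private var current = ""
  def advance(): Boolean = remaining match {
    case head :: tail => current = head; remaining = tail; true
    case Nil          => false
  }
  def getString: String = current
}

// Same shape as MyResultSet: hasNext moves the cursor, next() exposes it.
class CursorIterator(c: Cursor) extends Iterator[Cursor] {
  def hasNext: Boolean = c.advance()
  def next(): Cursor = c
}

val cursor = new Cursor(List("alice", "bob"))
val names = new CursorIterator(cursor).map(_.getString).toList
println(names) // List(alice, bob)
```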

4. Run the sbt project:

sky@Sky:~/knols_sessions/sbtprojectwithconnectionpooling$ sbt run
Loading /home/sky/scalatool/sbt-0.13/bin/sbt-launch-lib.bash
[info] Loading project definition from /home/sky/knols_sessions/sbtprojectwithconnectionpooling/project
[info] Set current project to sbtprojectwithconnectionpooling (in build file:/home/sky/knols_sessions/sbtprojectwithconnectionpooling/)
[info] Compiling 1 Scala source to /home/sky/knols_sessions/sbtprojectwithconnectionpooling/target/scala-2.10/classes...
[info] Running
15:38:15.956 [run-main] WARN com.jolbox.bonecp.BoneCP - Thread close connection monitoring has been enabled. This will negatively impact on your performance. Only enable this option for debugging purposes!
15:38:16.088 [run-main] DEBUG com.jolbox.bonecp.StatementHandle - select * from emp
15:38:16.111 [BoneCP-pool-watch-thread] DEBUG com.jolbox.bonecp.PoolWatchThread - Terminating pool watch thread
15:38:16.111 [BoneCP-pool-watch-thread] DEBUG com.jolbox.bonecp.PoolWatchThread - Terminating pool watch thread
15:38:16.112 [BoneCP-pool-watch-thread] DEBUG com.jolbox.bonecp.PoolWatchThread - Terminating pool watch thread
15:38:16.111 [BoneCP-pool-watch-thread] DEBUG com.jolbox.bonecp.PoolWatchThread - Terminating pool watch thread

Download the complete code example from here.

About Satendra Kumar

Satendra Kumar is a Senior Associate @ Sapient Global Market. In his 5 years of experience he has become a developer with proven experience in architecting and developing Reactive Systems and Big Data platforms with Scala, Akka, Spark, Kafka & Hadoop. He is a big fan of Scala. Satendra has a B.Tech in Computer Engineering from Uttar Pradesh Technical University.
This entry was posted in Scala.
