Database Performance Tuning
Maximizing the performance of the Iotellect Server database is a key factor in ensuring the high performance of the whole Iotellect installation. Here is a short list of generic database performance optimization hints:
Iotellect Server uses a lot of INSERT and UPDATE queries, so make sure to configure your database server for INSERT/UPDATE performance.
The number of allowed simultaneous connections must be greater than the Maximum Connection Pool Size global configuration setting.
When variable values containing large binary blocks (sounds, images, firmware files) are stored in the database, Iotellect Server saves them using a single INSERT/UPDATE transaction. Thus, the maximum size of data transferred in one transaction should not be limited, or should be limited to a sufficiently large value (not less than 100 MB).
For large installations, we recommend disabling synchronous flushing of INSERT'ed or UPDATE'd data to disk upon transaction commit; this can result in significant performance improvements (see the sketch after this list).
If the database server is running on the same machine as Iotellect Server, we recommend allowing it to use 25-40% of the available RAM, leaving the rest for Iotellect. Also, avoid swapping.
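For example, assuming a PostgreSQL backend, the hints above map to the following settings. The values are illustrative placeholders, not recommendations; other DBMSs expose equivalents, such as MySQL's max_allowed_packet for the per-transaction data size.
-- Illustrative PostgreSQL settings; adjust the values to your installation
ALTER SYSTEM SET max_connections = 300;    -- must exceed the Maximum Connection Pool Size; requires a restart
ALTER SYSTEM SET synchronous_commit = off; -- disables synchronous flushing on commit
ALTER SYSTEM SET shared_buffers = '4GB';   -- e.g. ~25% of 16 GB RAM on a shared host; requires a restart
SELECT pg_reload_conf();                   -- applies reloadable settings such as synchronous_commit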
Database Connection Pool Performance Tuning
This advanced section explains how Iotellect Server manages its database connection pool.
Iotellect Server uses a Java connection pooling library called c3p0 to manage database connections. The default settings of the c3p0 library can be overridden by adding new values to the hibernate.properties file located in the Iotellect Server installation directory.
Here is an example of hibernate.properties file content:
# Fail connection checkout attempts that wait longer than 30 seconds (the value is in milliseconds)
hibernate.c3p0.checkoutTimeout=30000
# Excessively long idle times are known to cause errors due to DBMS-initiated disconnections
hibernate.c3p0.maxIdleTime=300
The configuration settings of the c3p0 library are described here: https://www.mchange.com/projects/c3p0/. All c3p0 property names should be prefixed with "hibernate." in this file.
Tuning the Relational Database Connection Pool Size
Iotellect Server Database configuration has two options that significantly affect the performance of relational database access:
Minimum connection pool size
Maximum connection pool size
A connection pool is a cache of database connections maintained by the application so that connections can be reused for future database requests. Connection pools are used to enhance the performance of executing commands on a database.
There are several general rules for tuning the pool size:
The maximum connection pool size should be larger than the number of simultaneously connected devices with a high data transfer rate (i.e. frequent synchronizations or a large number of device-generated events). For example, if you monitor 2000 devices that are polled once per hour, you may leave the maximum pool size as low as 200, but if you have 500 devices synchronizing with the server every 5 minutes, it's better to increase the maximum pool size to 500.
The maximum pool size must be set lower than the total number of connections allowed by the backend database to avoid database-generated errors, such as "too many connections" (the query sketch after this list shows how to check the backend limit).
For large installations where peak activity is significantly higher than average activity, it is recommended to increase the minimum pool size to 20-50% of the maximum pool size.
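To verify there is enough headroom between the maximum pool size and the backend limit, you can query the database directly. A sketch for PostgreSQL (other DBMSs provide equivalent commands):
-- Connection ceiling configured on the backend
SHOW max_connections;
-- Connections currently open, grouped by database
SELECT datname, count(*) AS connections
FROM pg_stat_activity
GROUP BY datname;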
Database Cleanup
Database storage in Iotellect is engineered to be very resilient. However, it is highly recommended to stop the server and make a database backup before purging any event table or making any other direct changes to the database.
The Iotellect Server database mostly stores configuration and event data. While the volume of configuration data is normally low and does not tend to grow with time, the number of events (including variable value change events) may grow quickly, inflating the database size to many gigabytes.
Adjustment of event storage periods will only take effect with time, so manual database cleanup may sometimes make sense.
To estimate database volume and structure, run the View Database Statistics action from the Root context. This action shows Event Statistics, i.e. the number of events in every event container table and their distribution by type and source context, as well as the total number of events in every table.
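If you prefer to estimate table sizes at the database level instead, here is a sketch for PostgreSQL:
-- Largest tables with their total on-disk size (including indexes) and approximate row counts
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
       n_live_tup AS approx_rows
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 20;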
The basic database cleanup concept can be formulated as follows:
Purging any Iotellect Server database table that contains events while the server is not running will not have any effect on normal server operation when the server is restarted.
This effectively means that it's possible to purge almost any table in the Iotellect Server database, except for ag_properties and other "special" tables described in the Database Schema article. Another option is purging tables via the Execute Native Query action without stopping the server, e.g. by running DELETE FROM ag_xxx_change or a similar update query.
Ensure that every database user account performing maintenance operations possesses sufficient permissions to modify both the database itself and all system tables created by Iotellect Server.
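On PostgreSQL, a minimal sketch of such a grant could look as follows; the iotellect_maint role name is hypothetical:
-- Give a hypothetical maintenance role full access to all existing Iotellect tables in the public schema
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO iotellect_maint;
-- Extend the same privileges to tables created later by the current role
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON TABLES TO iotellect_maint;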
PostgreSQL Cleanup
The recommended cleanup cycle for a PostgreSQL database is running vacuumlo every hour and vacuumdb every day. This helps keep the database at an optimal size:
#!/bin/bash
# Hourly script: remove orphaned large objects
export PATH=$PATH:/usr/pgsql-9.4/bin/
TIME=`date +'%Y%m%d-%H%M'`
echo $TIME" executing vacuumlo"
vacuumlo -U postgres -v -w <DATABASE>
TIME=`date +'%Y%m%d-%H%M'`
echo $TIME" stop vacuumlo"
#!/bin/bash
# Daily script: full vacuum of all databases
export PATH=$PATH:/usr/pgsql-9.4/bin/
TIME=`date +'%Y%m%d-%H%M'`
echo $TIME" executing vacuumdb"
vacuumdb -fav -w
TIME=`date +'%Y%m%d-%H%M'`
echo $TIME" stop vacuumdb"
PostgreSQL BLOB Cleanup
There are two possible ways to delete BLOB data from the database.
The first option is to use the following command:
vacuumlo -U <USER> <DATABASE>
Be aware that database performance will be significantly reduced while this command runs.
The second option can be used on 24/7 systems. It requires creating a trigger function and a set of triggers on the relevant tables:
CREATE OR REPLACE FUNCTION "BlobDel"()
RETURNS trigger AS
$BODY$
BEGIN
-- Remove the large object metadata referenced by the deleted row
delete from pg_catalog.pg_largeobject_metadata
where pg_catalog.pg_largeobject_metadata.oid = OLD.ag_data;
-- Remove the large object data itself
delete from pg_catalog.pg_largeobject
where pg_catalog.pg_largeobject.loid = OLD.ag_data;
return OLD;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION "BlobDel"()
OWNER TO postgres;
The function is activated by a trigger:
CREATE TRIGGER ag_info
BEFORE DELETE
ON ag_info
FOR EACH ROW
EXECUTE PROCEDURE "BlobDel"();
This trigger function deletes the BLOBs referenced by rows removed from the ag_info table, keeping the pg_largeobject_metadata and pg_largeobject catalog tables free of orphaned entries. You can add triggers like this one to other tables of your database.
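For example, to attach the same function to another table that stores BLOB references in an ag_data column (the ag_events table name here is hypothetical):
CREATE TRIGGER ag_events_blobdel
BEFORE DELETE
ON ag_events
FOR EACH ROW
EXECUTE PROCEDURE "BlobDel"();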