Bindings Performance
Model and widget bindings are processed by a pool of threads working in parallel. Sometimes the pool (which has user-defined settings) is not large enough, and new event-driven or periodic binding processing requests arrive faster than they can be processed.
In this case, new binding executions are rejected. When this happens, a warning Info event is generated in the model context. Such events are also generated upon subsequent rejections; however, their rate is limited to avoid further performance degradation.
Thus, if a model seems to behave incorrectly and miss binding executions, it is recommended to run the Monitor Related Events action from its context and check its event log for binding rejection warnings.
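The exact rate-limiting logic is internal to Iotellect. Purely as an illustration of the general idea (a hypothetical helper, not the platform's API), a rejection warning could be throttled so that at most one warning per interval reaches the event log:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper: emits at most one rejection warning per interval
// to avoid flooding the event log and degrading performance further.
class RateLimitedWarning {
    private final long minIntervalMillis;
    private final AtomicLong lastEmitted = new AtomicLong(0);

    RateLimitedWarning(long minIntervalMillis) {
        this.minIntervalMillis = minIntervalMillis;
    }

    void warn(String message) {
        long now = System.currentTimeMillis();
        long last = lastEmitted.get();
        // Emit only if enough time has passed and no other thread beat us to it
        if (now - last >= minIntervalMillis && lastEmitted.compareAndSet(last, now)) {
            System.err.println("WARNING: " + message);
        }
    }
}
```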
Periodic Bindings Performance
In most cases, periodic bindings should not be used; on-event bindings should be used instead. If an on-event binding's expression refers to certain variables, the system is usually smart enough to detect that one of those values has changed and that the binding needs to be re-evaluated. A periodic binding should only be used if an on-event binding does not react to changes of the referenced values.
Benefits of on-event bindings over periodic bindings:
- Lower CPU, disk, and network I/O usage, since calculations are only performed when values actually change.
- Immediate reaction to value changes. Periodic bindings only reflect such changes at the end of the period, and decreasing the evaluation period leads to higher resource usage.
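To make the difference concrete, the following sketch contrasts the two evaluation styles in plain Java (generic code, not the Iotellect binding API): a periodic task re-evaluates on a fixed schedule whether or not anything changed, while an event-driven callback re-evaluates only when a change notification arrives.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.LongConsumer;

public class BindingStyles {
    public static void main(String[] args) throws InterruptedException {
        // Periodic style: the expression is re-evaluated every second,
        // whether or not the referenced value actually changed.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("periodic re-evaluation"),
                0, 1, TimeUnit.SECONDS);

        // On-event style: the expression is re-evaluated only when a
        // change notification arrives, so reaction is immediate and
        // idle values cost nothing.
        LongConsumer onValueChange =
                v -> System.out.println("on-event re-evaluation, new value: " + v);
        onValueChange.accept(42); // simulated value-change event

        Thread.sleep(2500);
        scheduler.shutdown();
    }
}
```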
Binding Concurrency
Bindings are normally executed in a thread pool that defines how many tasks can run concurrently.
The pool is configured by three settings:
- Normal Concurrent Bindings. This setting defines the base pool size, i.e. how many tasks can run in parallel under normal circumstances. If more tasks than this need to start, the excess tasks are queued and executed later, once execution threads become free.
- Maximum Unprocessed Binding Queue Length. This is the maximum number of binding processing tasks that can be queued. If the queue gets full, additional binding processing threads are created in the pool until the Maximum Concurrent Bindings limit is reached.
- Maximum Concurrent Bindings. This is the absolute maximum number of threads in the binding processing pool. If this maximum is reached and even more tasks need to be executed immediately, their execution either fails or blocks until the pool frees some resources.
In other words, the pool works in the following way:
- At first, binding processing tasks are executed by the threads of the base pool, whose number is limited by Normal Concurrent Bindings
- If all those threads are busy, new binding processing tasks are queued until the queue length reaches Maximum Unprocessed Binding Queue Length
- If the queue is full, new threads are created in the pool until the total thread count reaches Maximum Concurrent Bindings
- If all threads in the fully extended pool are busy, tasks are either rejected or further delayed, depending on the context
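This behavior closely mirrors a standard Java thread pool with a bounded queue: core threads are used first, the queue absorbs overflow, extra threads are added only once the queue is full, and anything beyond that is rejected. The sketch below illustrates that general pattern with hypothetical setting values; it is not the actual Iotellect implementation.

```java
import java.util.concurrent.*;

public class BindingPoolSketch {
    public static void main(String[] args) {
        int normalConcurrentBindings = 4;   // base pool size (Normal Concurrent Bindings)
        int maxQueueLength = 100;           // Maximum Unprocessed Binding Queue Length
        int maxConcurrentBindings = 16;     // absolute limit (Maximum Concurrent Bindings)

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                normalConcurrentBindings,              // core threads: used first
                maxConcurrentBindings,                 // extra threads: created only when the queue is full
                60, TimeUnit.SECONDS,                  // idle extra threads are released after this timeout
                new ArrayBlockingQueue<>(maxQueueLength),
                new ThreadPoolExecutor.AbortPolicy()); // tasks beyond all limits are rejected

        // Submitting a "binding evaluation" task: once core threads are busy the
        // queue fills up, then extra threads are added, then tasks are rejected.
        try {
            pool.execute(() -> System.out.println("evaluating binding expression"));
        } catch (RejectedExecutionException e) {
            System.err.println("binding execution rejected: pool saturated");
        }

        pool.shutdown();
    }
}
```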
Thread Number Considerations
Threads that process bindings are normally quite expensive to create and maintain in terms of system resources. Each thread reserves 128 to 512 kilobytes of RAM for its internal stack and consumes a certain amount of CPU time for context switching, i.e. switching a CPU core between different threads.
Thus, the number of threads should generally be kept low. The maximum practical number of threads for a modern server-grade machine is around 10-20 thousand; for a workstation-grade machine it is 5-10 thousand. However, the total number of threads in a highly loaded Iotellect Server should generally be kept under 1000-1500, and the number of threads in an Iotellect Client that runs many widgets should be kept under 300-500.
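As a back-of-the-envelope check using the stack sizes quoted above (an estimate, not a measured figure), 1500 threads at 512 kilobytes of stack each reserve roughly 750 megabytes of RAM for stacks alone. The current thread count of a running JVM can be inspected with the standard management API:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Live threads currently running in this JVM (daemon and non-daemon)
        System.out.println("Live threads: " + threads.getThreadCount());
        System.out.println("Peak threads: " + threads.getPeakThreadCount());
    }
}
```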