
Cloud performance and scalability. Azure Service Bus and Azure Durable Functions - part 3

I promised to complete this series of articles on Azure Durable Functions with a summary of tips for setting the configuration parameters.



When setting these parameters, pay attention to the following tips:

1) Keep orchestration histories small

  • Orchestration data stored in the History table includes output payloads from activity and sub-orchestrator functions.

  • Payloads from external events are also stored in the History table.

  • Because the full history is loaded into memory every time an orchestrator needs to execute, a large enough history can result in significant memory pressure on a given VM.

  • The length and size of the orchestration history can be reduced by splitting large orchestrations into multiple sub-orchestrations or by reducing the size of outputs returned by the activity and sub-orchestrator functions it calls.

  • Alternatively, you can reduce memory usage by lowering per-VM concurrency throttles to limit how many orchestrations are loaded into memory concurrently.
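The per-VM concurrency throttles mentioned above are configured in the durableTask section of host.json. As a sketch, the two most relevant settings look like this (the values shown are illustrative, not recommendations; tune them against your own workload):

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "maxConcurrentOrchestratorFunctions": 5,
      "maxConcurrentActivityFunctions": 10
    }
  }
}
```

Lowering these values means fewer orchestrations (and thus fewer histories) are resident in memory on each VM at once, trading some throughput for a smaller memory footprint.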

2) Purge old instance data

  • The partitioning of the Instances table allows it to store millions of orchestration instances without any noticeable impact on runtime performance or scale.

  • However, the number of instances can have a significant impact on multi-instance query performance.

  • To control the amount of data stored in these tables, consider periodically purging old instance data.

3) Lower controlQueueBufferThreshold to reduce memory usage

  • Increasing the value for controlQueueBufferThreshold allows a single orchestration or entity to process events faster.

  • However, increasing this value can also result in higher memory usage.

  • The higher memory usage is partly due to pulling more messages off the queue and partly due to fetching more orchestration histories into memory.

  • Reducing the value for controlQueueBufferThreshold can therefore be an effective way to reduce memory usage.

4) Raise controlQueueBufferThreshold to increase throughput

  • In some cases, you can significantly increase the throughput of external events, activity fan-in, and entity operations by increasing the value of the controlQueueBufferThreshold setting in host.json.

  • Increasing this value beyond its default causes the Durable Task Framework storage provider to use more memory to prefetch these events more aggressively, reducing delays associated with dequeuing messages from the Azure Storage control queues.
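In host.json for the Azure Storage provider, the setting sits under the storageProvider section. A minimal sketch (the value 256 is illustrative; the default varies by hosting plan, so verify against your runtime version before relying on it):

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "controlQueueBufferThreshold": 256
      }
    }
  }
}
```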

5) Keep large payloads out of queues and tables

  • In most cases, Durable Functions doesn't use Azure Storage Blobs to persist data.

  • However, queues and tables have size limits that can prevent Durable Functions from persisting all of the required data into a storage row or queue message.

  • For example, when a piece of data that needs to be persisted to a queue is greater than 45 KB when serialized, Durable Functions will compress the data and store it in a blob instead.

  • To minimize memory overhead, consider persisting large data payloads manually (for example, in blob storage) and instead pass around references to this data.
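This is the claim-check pattern: store the large payload yourself and let the orchestration inputs and outputs carry only a small reference. A minimal sketch, using an in-memory dict as a stand-in for a blob container (in practice you would use the azure-storage-blob client; the function names and the envelope shape here are hypothetical):

```python
import json
import uuid

# Stand-in for a blob container; replace with a real blob client in production.
_blob_store = {}

# Threshold beyond which the payload is spilled to the external store.
PAYLOAD_LIMIT_BYTES = 45 * 1024

def store_payload(payload):
    """Persist a large payload externally and return a small envelope to pass around."""
    data = json.dumps(payload)
    if len(data.encode("utf-8")) <= PAYLOAD_LIMIT_BYTES:
        return {"inline": payload}          # small enough: pass the value directly
    ref = str(uuid.uuid4())
    _blob_store[ref] = data                 # large: park it and hand out a reference
    return {"blobRef": ref}

def load_payload(envelope):
    """Resolve an envelope produced by store_payload back into the payload."""
    if "inline" in envelope:
        return envelope["inline"]
    return json.loads(_blob_store[envelope["blobRef"]])
```

An activity function would call store_payload on its result and return the envelope; downstream activities call load_payload only when they actually need the data, so the orchestration history stays small.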




©2020 by PlanetIT. Proudly created with Wix.com
