Resource Allocation Patterns
Resource management is a central concern in real-time and embedded software design. This article discusses commonly used resource allocation patterns. The discussion is divided into two parts: resource allocation algorithms and distributed resource allocation.
Resource Allocation Algorithms
This section covers resource allocation algorithms that are commonly encountered in real-time systems. These algorithms are:
- Hottest First Resource Allocation
- Coldest First Resource Allocation
- Load Balancing Resource Allocation
- Future Resource Booking
Hottest First Resource Allocation
In hottest first resource allocation, the resource released most recently is allocated on the next request. To implement this last-in, first-out (LIFO) allocation, the list of free resources is maintained as a stack: an allocation request is serviced by popping a free resource from the stack, and a freed resource is pushed back onto it.
The disadvantage of this scheme is uneven utilization: the resources near the top of the stack are used all the time. If allocation causes wear and tear, the frequently allocated resources wear out first. This scheme is primarily used in scenarios where allocating a resource involves considerable setup before use. Under light load only a few resources are in use, so the remaining resources can be powered down or operated in low-power mode.
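A minimal Python sketch of the stack-based scheme described above; the resource names are illustrative:

```python
class HottestFirstPool:
    """Hottest-first (LIFO) allocation: the free list is a stack."""

    def __init__(self, resources):
        self._free = list(resources)      # top of stack = end of list

    def allocate(self):
        if not self._free:
            raise RuntimeError("no free resources")
        return self._free.pop()           # pop the most recently freed resource

    def release(self, resource):
        self._free.append(resource)       # push back on top of the stack


pool = HottestFirstPool(["r1", "r2", "r3"])
a = pool.allocate()    # top of the stack: "r3"
pool.release(a)
b = pool.allocate()    # "r3" again: the hottest resource is reused
```

Note how the same resource keeps being reused, which is exactly the behavior that lets the colder resources stay powered down.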
Coldest First Resource Allocation
In coldest first resource allocation, the resource that has been free the longest is allocated first. To implement this first-in, first-out (FIFO) allocation, the allocating entity keeps the free resources in a queue: an allocation request is serviced by removing a resource from the head of the queue, and a freed resource is returned to the free list by adding it to the tail.
The main advantage of this scheme is even utilization of resources. Also, a freed resource is not reused for quite a while, so inconsistencies in resource management can be resolved via audits before the resource is allocated again.
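The queue-based scheme can be sketched the same way, here using Python's `collections.deque`; resource names are again illustrative:

```python
from collections import deque


class ColdestFirstPool:
    """Coldest-first (FIFO) allocation: the free list is a queue."""

    def __init__(self, resources):
        self._free = deque(resources)     # head of queue = left end

    def allocate(self):
        if not self._free:
            raise RuntimeError("no free resources")
        return self._free.popleft()       # head of queue: idle the longest

    def release(self, resource):
        self._free.append(resource)       # freed resources go to the tail


pool = ColdestFirstPool(["r1", "r2", "r3"])
a = pool.allocate()    # "r1", the coldest resource
pool.release(a)        # "r1" rejoins at the tail
b = pool.allocate()    # "r2": the freed resource is not reused immediately
```

The freed resource travels through the whole queue before being reallocated, which is the window in which an audit can catch an inconsistency.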
Load Balancing Resource Allocation
In situations involving multiple resource groups, load balancing is used. Each resource group is controlled by a local resource controller. In this technique, the resource allocator first determines the most lightly loaded resource group; the resource controller of that group then performs the actual resource allocation. The main objective is to distribute the load evenly amongst resource controllers.
Consider a switching system that maintains a pool of tone-recognition DSPs, each capable of servicing a large number of calls. The resource allocator distributes the load between the DSPs by assigning the next call to the DSP with the least number of active calls.
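The DSP example can be sketched as follows; the `Dsp` class and call counts are hypothetical stand-ins for real DSP state:

```python
class Dsp:
    """Illustrative DSP: its local controller tracks active calls."""

    def __init__(self, name):
        self.name = name
        self.active_calls = 0


def assign_call(dsps):
    # High-level decision: pick the least-loaded DSP ...
    dsp = min(dsps, key=lambda d: d.active_calls)
    # ... then its local controller books the call.
    dsp.active_calls += 1
    return dsp


dsps = [Dsp("dsp0"), Dsp("dsp1"), Dsp("dsp2")]
for _ in range(6):
    assign_call(dsps)
# Six calls across three DSPs end up evenly distributed.
```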
Future Resource Booking
Here, each resource allocation is for a specified time and is only valid until that time is reached. When the specified time passes, the resource is considered free, so it does not need to be freed explicitly. (This is quite similar to advance booking in a movie theater.)
This technique is used in scenarios where a particular resource needs to be allocated for short durations to multiple entities in the future. When an allocation request is received, the booking status of the resource is searched to find the earliest future time at which the request can be serviced. The resource booking table is updated with the start and end time of each allocation.
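The search described above can be sketched as a scan over a booking table of (start, end) pairs; the time units are abstract and the function name is illustrative:

```python
def earliest_slot(bookings, duration, not_before=0):
    """Return the earliest start time, at or after not_before, at which
    the resource is free for the requested duration. bookings is a list
    of (start, end) pairs."""
    t = not_before
    for start, end in sorted(bookings):
        if t + duration <= start:
            return t          # the request fits in the gap before this booking
        t = max(t, end)       # otherwise skip past this booking
    return t                  # the request fits after the last booking


bookings = [(0, 5), (7, 10)]
slot = earliest_slot(bookings, duration=2)   # the gap [5, 7) fits exactly
bookings.append((slot, slot + 2))            # record the new booking
```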
Distributed Resource Allocation
Most real-time systems are distributed across multiple processors. The techniques used to manage resource allocation in such distributed systems are discussed below. They are:
- Centralized Resource Allocation
- Hierarchical Resource Allocation
- Bi-Directional Resource Allocation
- Random Access
Centralized Resource Allocation
In this technique, a centralized allocator keeps track of all the available resources. All entities send messages requesting resources and the allocator responds with the allocated resources. The main advantage of this scheme is simplicity. However, the disadvantage is that as the system size increases, the centralized allocator gets heavily loaded and becomes a bottleneck.
For example, in a switching system, the space slot allocation strategy uses this scheme. Space slot free-busy status is maintained at a central entity.
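A minimal sketch of a centralized allocator, assuming request and reply messages are modeled as plain dicts (a stand-in for real inter-processor messaging):

```python
class CentralAllocator:
    """Single allocator that owns the free/busy status of all resources."""

    def __init__(self, resources):
        self._free = set(resources)

    def handle(self, request):
        if request["type"] == "alloc":
            if not self._free:
                return {"type": "reject"}     # all resources busy
            return {"type": "grant", "resource": self._free.pop()}
        if request["type"] == "free":
            self._free.add(request["resource"])
            return {"type": "ack"}


allocator = CentralAllocator({"slot1", "slot2"})
grant = allocator.handle({"type": "alloc"})
allocator.handle({"type": "free", "resource": grant["resource"]})
```

Every request funnels through this one object, which is precisely why it becomes a bottleneck as the system grows.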
Hierarchical Resource Allocation
In this technique, the resource allocation is done in multiple steps. First, the centralized allocator takes the high-level resource allocation decision. Then it passes the allocation request to a secondary allocator, which takes the detailed resource allocation decision. The example below illustrates the technique.
In a switching system, the trunk allocation is carried out in the following steps:
- The centralized allocator determines which trunk group should be used to route outgoing calls.
- The call request is then forwarded to the processor supporting the trunk group.
- The trunk group processor then selects the actual trunk from the trunk group.
The advantage of this scheme is that the centralized allocator is not burdened with the detailed resource allocation decisions; those are taken by the trunk group processors. Thus the design scales well as the number of trunk group processors in the switch increases.
Note that this example shows only a two-step resource allocation. The technique can be extended to multiple levels of hierarchical resource allocation.
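The two-step trunk example can be sketched as follows; the group names and the "most free trunks" selection rule are illustrative assumptions:

```python
class TrunkGroup:
    """Local controller: owns detailed allocation within its own group."""

    def __init__(self, name, trunks):
        self.name = name
        self._free = list(trunks)

    def free_count(self):
        return len(self._free)

    def allocate_trunk(self):
        return self._free.pop()       # detailed decision, taken locally


def allocate(groups):
    # High-level decision only: route via the group with the most free trunks.
    group = max(groups, key=lambda g: g.free_count())
    return group.name, group.allocate_trunk()


groups = [TrunkGroup("tg0", [1, 2]), TrunkGroup("tg1", [1, 2, 3])]
name, trunk = allocate(groups)        # tg1 has the most free trunks
```

The central `allocate` function never touches individual trunks, so adding more trunk groups adds no per-trunk work at the center.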
Bi-Directional Resource Allocation
This scheme allows two independent allocators to allocate the same set of resources. It is used in situations like bi-directional trunk groups. The switch at each end of the trunk group allocates the trunks from one specific end. This logic avoids both ends allocating the same resource under light and medium load. However, under heavy load, there is a possibility of both ends allocating the same resource. This situation is resolved by a fixed arbitration algorithm leading to one of the switches withdrawing its claim on the resource.
Consider the example of two switches A and B sharing a bi-directional trunk group with 32 trunks. Switch A searches for a free trunk starting from trunk number 1; switch B searches starting from trunk number 32. Most of the time there will be no clash in resource allocation. However, under heavy utilization of the trunk group, when a clash for a trunk does occur, the incoming call takes precedence over the outgoing call.
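The opposite-end search can be sketched as below. As a simplification, both ends share one free/busy map here; in a real system each switch holds its own copy, which is why clashes can occur near exhaustion and need the arbitration rule above:

```python
def allocate_from_a(busy):
    for trunk in range(1, 33):        # switch A searches up from trunk 1
        if trunk not in busy:
            busy[trunk] = "A"
            return trunk
    return None                       # trunk group exhausted


def allocate_from_b(busy):
    for trunk in range(32, 0, -1):    # switch B searches down from trunk 32
        if trunk not in busy:
            busy[trunk] = "B"
            return trunk
    return None


busy = {}
a = allocate_from_a(busy)             # lowest-numbered free trunk
b = allocate_from_b(busy)             # highest-numbered free trunk
```

Under light load the two searches meet only in the middle, so the ends almost never contend for the same trunk.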
Random Access
Whenever a resource must be shared between multiple entities that cannot synchronize with each other and have no access to a centralized allocator, designers resort to random access to the resource. Each entity needing the resource simply assumes it has the resource and uses it. If two or more entities use the resource at the same time, none of them gets service, and all of them retry after a random back-off timer.
Consider the case of a mobile system, where a subscriber originating a call must send a message to the base station. This is achieved using a random access channel. If two or more mobile terminals use the random access channel at the same time, the message is corrupted; the requesting terminals time out waiting for a response from the base station and try again after a random back-off.
The main disadvantage of this technique is that it works well only when overall contention for the random access channel is small. As contention increases, collisions dominate and hardly any resource requests are serviced.
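A toy simulation of the behavior described above, assuming slotted time and a uniform back-off of 1 to 8 slots (both assumptions are illustrative, not taken from any particular air interface):

```python
import random


def simulate(num_terminals, max_slots, seed=0):
    """Count how many terminals get through within max_slots time slots."""
    rng = random.Random(seed)
    next_attempt = [0] * num_terminals    # slot of each terminal's next try
    done = [False] * num_terminals
    for slot in range(max_slots):
        active = [t for t in range(num_terminals)
                  if not done[t] and next_attempt[t] == slot]
        if len(active) == 1:
            done[active[0]] = True        # a lone transmission succeeds
        elif len(active) > 1:
            for t in active:              # collision: random back-off
                next_attempt[t] = slot + 1 + rng.randint(1, 8)
    return sum(done)
```

With few terminals relative to the back-off range, most retries land in distinct slots and requests get through; as the terminal count approaches the back-off range, collisions repeat and throughput collapses.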