Handling API rate limiting when making outbound API calls

Hi all

I was wondering whether anyone has experience of controlling the rate at which API calls are made from Liberty Create, or has come up with an elegant way of preventing rate limiting from becoming an issue when making a significant number of requests from Liberty Create via API calls.

The scenario is that we are using HR data brought in from our Civica HR & Payroll system to update various fields held on user accounts imported from our iHasco Atlas system, and then sending this updated user account data back to the iHasco Atlas system through API calls.

The iHasco client API is a RESTful service that limits requests to 200 per minute. Updating a user account on the iHasco Atlas system requires making an API call from Liberty Create for each user, so in practice this involves making 500+ API calls if we want to keep all of these user accounts up to date, i.e. holding current details of each member of staff’s job title, manager, work location, employment status, etc. At 200 requests per minute, 500+ calls will take at least three one-minute windows even running flat out at the limit.

Initial testing reveals that the API calls from Liberty Create are happening too quickly, so the rate limit is soon exceeded and it is simply not possible to update all the required user accounts.

The response header from making a ‘PUT’ API call to the iHasco Atlas system does contain a remaining allowance value, but I’m not sure how that could be utilised to control when API calls should be paused/restarted, especially as each API call forms part of a sequence of Rules that are used to process each user account record held on Liberty Create.
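
For comparison, outside Create that kind of header-driven throttling is simple enough. Here’s a minimal Python sketch, assuming the remaining allowance comes back in something like an X-RateLimit-Remaining header with a one-minute window (both the header name and the window are guesses on my part, not confirmed details of the Atlas API):

```python
import time

import requests  # third-party HTTP client, used here purely for illustration

# Hypothetical header name – check what the Atlas API actually returns.
REMAINING_HEADER = "X-RateLimit-Remaining"

def put_with_throttle(url: str, payload: dict, window_seconds: float = 60.0):
    """PUT one record, pausing when the advertised allowance is nearly spent."""
    response = requests.put(url, json=payload, timeout=30)
    remaining = response.headers.get(REMAINING_HEADER)
    if remaining is not None and int(remaining) <= 1:
        # Allowance for this window is all but gone; wait for it to refresh.
        time.sleep(window_seconds)
    return response
```

The snag in Create is that each call sits inside a sequence of Rules, with no obvious equivalent of that pause.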

Any ideas, anyone?

Thanks

Hi Stuart,

I did something similar recently; for your purposes it would look something like the below.

You can create a batch process via a Signal triggered at a regular interval, using the properties below on a related ‘Master’ Object (one-to-many with the Users Object; you need to relate this Master record to ALL Users in the first instance):

  • ‘Total Users’ – a count of all Users.
  • ‘Total Processed Users’ – initially set to zero; incremented by 1 after each API call.
  • ‘Total Batch Processed Users’ – initially set to zero; incremented by 1 after each API call, and reset at the start of each batch run.
  • ‘Max API Calls’ – set to 199 or less, to stay under the 200-per-minute limit.

On each User record, add a Boolean property “Data Updated by API?”.

Trigger the Event from the Master Object on the Users Object, and use the above to restrict which records it fires on. The Rule that triggers the API call should have:

  • a Trigger constraint of ‘Total Processed Users’ < ‘Total Users’.
  • a Response constraint of ‘Total Batch Processed Users’ <= ‘Max API Calls’ && ‘Data Updated by API?’ = No.

After each API Call update the “Data Updated by API?” property of that User to Yes.

So, for say 500 Users, on the first trigger it will run on 199 Users and stop.

The next time the Signal triggers the API call, it will work on the remaining Users and stop.

It won’t then run again, because ‘Total Processed Users’ is no longer less than ‘Total Users’.

To run it again, you just reset the ‘Total Processed Users’ and ‘Total Batch Processed Users’ counts, as well as the “Data Updated by API?” property on all Users.
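
For anyone who wants to sanity-check the counters outside Create, here’s a minimal Python sketch of the same logic. The class and field names are just illustrative stand-ins for the Objects and properties above, and the per-batch reset of ‘Total Batch Processed Users’ and the exact stop condition are inferred from the behaviour described (199 Users per trigger):

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    data_updated_by_api: bool = False   # "Data Updated by API?" flag

@dataclass
class Master:
    users: list                         # one-to-many: Master record -> all Users
    max_api_calls: int = 199            # 'Max API Calls', kept under the 200/min limit
    total_processed: int = 0            # 'Total Processed Users'

    @property
    def total_users(self) -> int:       # 'Total Users'
        return len(self.users)

def signal_tick(master: Master) -> int:
    """One Signal trigger: process up to max_api_calls un-flagged Users."""
    # Trigger constraint: 'Total Processed Users' < 'Total Users'
    if not master.total_processed < master.total_users:
        return 0
    batch_processed = 0                 # 'Total Batch Processed Users', reset per batch
    for user in master.users:
        if batch_processed >= master.max_api_calls:
            break                       # batch allowance spent; wait for the next Signal
        if user.data_updated_by_api:
            continue                    # already handled by an earlier batch
        # ... make the PUT call to the Atlas API for this User here ...
        user.data_updated_by_api = True  # flag it so later batches skip it
        master.total_processed += 1      # both counters incremented after each call
        batch_processed += 1
    return batch_processed

def reset(master: Master) -> None:
    """Re-arm the whole process, as described above."""
    master.total_processed = 0
    for user in master.users:
        user.data_updated_by_api = False

# 500 Users: successive ticks process 199, 199 and 102 Users, then nothing further.
staff = Master(users=[User(f"user{i}") for i in range(500)])
while signal_tick(staff):
    pass
```

In Create itself the counters live on the Master record and the loop is the Signal-triggered Rule; the sketch just makes the stop conditions explicit.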

Hi IanHR

Thanks for your feedback. Sorry for not replying earlier, but I’ve only just read your response.

An idea similar to this was also suggested by my work colleagues, so I’ve already taken the step (as your advice also suggests) of splitting the update of user data through API calls into batches of up to 150 requests at a time.

Four batches are used in all, each consisting of a Rule set to be triggered by its own Signal at 6.30am, 6.35am, 6.40am and 6.45am respectively. Apart from some initialisation required when the first batch is run, each Rule effectively triggers the same sequence of Rules to iterate through the Atlas user account details (records) held on Liberty Create. Note that each user account record is related to a single ‘Gateway’ record by a many-to-one relationship, and the ‘Gateway’ record holds the count of requests made so far in the current batch.

A Subset is used as a constraint in a Rule to limit the number of API calls that can be made in any one batch. Like your solution, this is based, in part, on keeping count of how many requests have been made so far (the counter is reset at the start of each batch) and checking that this is still less than 150; the counter is incremented whether an API call succeeds or fails. Also, when a request is made, a Boolean flag on the relevant user record is set, and the Subset checks this flag so that any records with it set are excluded from consideration on subsequent batches. For example, when the second batch runs, 150 records will be excluded from being considered as needing updating; on the third batch this will be 300, and on the fourth batch 450. Note that before the first batch runs, the Boolean flag on all Atlas user account records is reset to ensure that, initially, no records are excluded. The batch logic is sketched below.
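
Roughly, the Subset condition and the per-batch bookkeeping amount to the following (a minimal Python sketch; the names are hypothetical, and in Create this lives in a Subset constraint and Rule actions rather than code):

```python
from dataclasses import dataclass

BATCH_LIMIT = 150  # per-batch cap, comfortably under the 200-per-minute allowance

@dataclass
class Gateway:                     # the single 'Gateway' record (many-to-one from users)
    batch_count: int = 0           # requests made so far in the current batch

@dataclass
class AtlasUser:
    user_id: int
    updated_flag: bool = False     # set once a request has been made for this record

def in_subset(user: AtlasUser, gateway: Gateway) -> bool:
    # Mirror of the Subset constraint: room left in this batch,
    # and the record not already picked up by an earlier batch.
    return gateway.batch_count < BATCH_LIMIT and not user.updated_flag

def run_batch(users: list[AtlasUser], gateway: Gateway) -> None:
    gateway.batch_count = 0        # counter reset at the start of each batch
    for user in users:
        if not in_subset(user, gateway):
            continue
        # ... PUT this user's details to the Atlas API here ...
        gateway.batch_count += 1   # incremented whether the call succeeds or fails
        user.updated_flag = True   # excluded from the Subset on later batches
```

Run four times, as the 6.30am to 6.45am Signals do, the batches pick up 150, 150, 150 and then the remaining records.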

I’ve yet to test this thoroughly, but in principle I think it should avoid potential issues with rate limiting. This does appear to be the only obvious approach to overcoming the problem, but it has the drawback that, in using Signals, you have to set a specific time for the updates to occur (that is, it is not dynamic). For this situation that shouldn’t be an issue, though.

Thanks for your help.

Stuart