I’ve been back on the road visiting file transfer customers and there’s growing concern out there about the ability to track and predict failure against defined service level agreements (SLAs). In general, I’m seeing most SLAs in our industry cleave to one or more of the following requirements:
1) Application Availability: Did our service meet the 99.xxx% goal we set? Most companies I’ve seen track this in minutes per month and year, and some track it by visibility to key customers. For example, if the file transfer service was unexpectedly down at 3am but only 15 customers would have noticed, can we count it as an outage for only those 15?
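Tracking availability in “minutes per month and year” comes down to converting the 99.xxx% goal into a downtime budget. Here is a minimal sketch of that arithmetic; the specific targets and the 30-day month are illustrative assumptions, not values from any particular SLA.

```python
# Translate an availability target into an allowed-downtime budget.
# Targets and period lengths below are illustrative assumptions.

MINUTES_PER_MONTH = 30 * 24 * 60   # simplifying assumption: 30-day month
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_budget(availability_pct: float, period_minutes: int) -> float:
    """Minutes of allowed downtime for a given availability goal."""
    return period_minutes * (1 - availability_pct / 100)

for target in (99.9, 99.95, 99.99):
    monthly = downtime_budget(target, MINUTES_PER_MONTH)
    yearly = downtime_budget(target, MINUTES_PER_YEAR)
    print(f"{target}%: {monthly:.1f} min/month, {yearly:.1f} min/year")
```

At 99.9%, for instance, the budget works out to roughly 43 minutes per month, which is why the jump to each extra “nine” matters so much in practice.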
2) Round-trip Response Time: Does our service reliably return results from incoming submissions within X time? This is big at data centers that self-identify as “item processors” or have an “EDI/transmissions” group. This can also be further specified by class of customer or work (e.g., higher priority transactions) and time of day.
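A per-class response-time SLA like this usually reduces to comparing each submission’s round trip against the target for its class of work. The sketch below shows that check; the class names, thresholds, and field names are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical round-trip SLA check by class of work.
# Class names and thresholds are illustrative assumptions.
from dataclasses import dataclass

SLA_SECONDS = {"high-priority": 60, "standard": 300}

@dataclass
class Submission:
    submission_id: str
    work_class: str
    round_trip_seconds: float

def breaches(submissions):
    """Return submissions whose round trip exceeded their class target."""
    return [s for s in submissions
            if s.round_trip_seconds > SLA_SECONDS[s.work_class]]

# Example: one high-priority item over its 60-second target.
late = breaches([
    Submission("txn-1", "high-priority", 75.0),
    Submission("txn-2", "standard", 120.0),
])
```

Time-of-day rules would layer on top of this by selecting a different threshold table per window, but the core comparison stays the same.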
3) Expected Data Within Defined Transfer Window: Did we receive (or send) the “right” files during the transmission window from X:XX to Y:YY? This one can be harder than it looks. First, your definition of the “right files” often has dependencies on control or summary files plus specific file formats, names and sizes. Then there is the additional challenge of predicting which bundles are “running late,” and the question of whether warning alerts should fire with 30 minutes or 15 minutes to go.
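The “running late” question above can be sketched as a simple check: given the files still expected, fire a warning once the end of the window is within a configured lead time. Everything here, including the function name, file names, and the 30/15-minute lead times, is an assumption for illustration, not part of any real SLA definition.

```python
# Sketch of a "running late" warning for an expected-files window.
# File names, window times, and lead times are illustrative assumptions.
from datetime import datetime, timedelta

def pending_warnings(expected, received, window_end, now,
                     lead_times=(timedelta(minutes=30),
                                 timedelta(minutes=15))):
    """Return (missing files, warning lead times already crossed)."""
    missing = set(expected) - set(received)
    if not missing:
        return missing, []
    crossed = [lt for lt in lead_times if now >= window_end - lt]
    return missing, crossed

# Example: 20 minutes before the window closes, one file is still missing,
# so the 30-minute warning has been crossed but not the 15-minute one.
end = datetime(2024, 1, 1, 6, 0)
missing, crossed = pending_warnings(
    ["ctl.dat", "batch.dat"], ["ctl.dat"], end, datetime(2024, 1, 1, 5, 40))
```

A real implementation would also need the format/name/size validation mentioned above before a file counts as “received,” which is exactly what makes this requirement harder than it looks.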
Even with these common requirements in the field, the nature of SLAs continues to evolve. As we see additional trends develop we’ll continue to note them – please expect more information in the coming months.