“Why are we still FTP’ing files to each other in 2010?”
That is one of the philosophical questions I get to ponder almost once a week as I chat with my colleagues in the industry. Part of the answer is easy: “Almost everyone has or knows about FTP.” Based on that answer, a number of secure variants on FTP (SFTP, FTPS, even our own command-line MOVEit Xfer client) have emerged, along with extensions to the core FTP command set itself.
But why bother moving FILES around when we could all be doing little bitty TRANSACTIONS to each other using SOAP or other transaction-friendly schemes? The answer to that question didn’t come to me until I’d spent several years in the field, traveling between banks, data centers and large corporations in support of distributed, enterprise-class file transfers.
In the 1990s, the local branch of your bank worked something like this. At the end of every business day, after all the customers had left, the tellers would compare the cash in their drawers against what the accumulated transactions of the day on the computer said should be there. During this reconciliation process, adjustments might be made to the record of the day to explain the discrepancies – essentially adding extra transactions after the bank was closed. However, these transactions often did NOT occur in real time. Instead, after all balancing was done and local management was satisfied with the result, a fixed set of files with the branch bank’s “final answer” was sent in to the home office, and everyone went home for the night.
So why did/do banks use files for this workflow instead of transactions? Why did their operations experts only ask branches to send in a single set of files?
- It hid the complexity of the bank’s central systems from branches. Branch managers didn’t have to worry about sending this file to this system and that file to that system, each with its own error codes: they just sent the files and went home.
- It was less risky for the branch managers and their staff. Branch managers didn’t have to worry about a misbehaving back-end system keeping their tellers on for an extra hour: they just sent the files and went home.
- It let central management put faith in the numbers. When a branch sent in its final report, central management knew that its numbers had undergone local verification, and that its numbers were not going to be superseded by any “last minute” transactions.
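The reconcile-then-send workflow above can be sketched in a few lines of code. This is purely illustrative: the names (`Transaction`, `reconcile`, `build_final_file`) and the flat comma-separated file format are my own inventions, not anything from a real banking system, but they show the key property – discrepancies are resolved locally as adjustment records, and only the verified “final answer” leaves the branch as a single file.

```python
# Hypothetical sketch of the end-of-day batch workflow described above.
# All names and the file format are illustrative, not from a real system.
from dataclasses import dataclass

@dataclass
class Transaction:
    teller_id: str
    amount: int  # in cents

def reconcile(recorded, counted_cash):
    """Compare each teller's recorded total against counted cash;
    emit an adjustment transaction for any discrepancy."""
    totals = {}
    for t in recorded:
        totals[t.teller_id] = totals.get(t.teller_id, 0) + t.amount
    adjustments = []
    for teller, cash in counted_cash.items():
        diff = cash - totals.get(teller, 0)
        if diff != 0:
            adjustments.append(Transaction(teller, diff))
    return adjustments

def build_final_file(recorded, adjustments):
    """Serialize the verified 'final answer' as one flat file to send upstream."""
    all_txns = recorded + adjustments
    lines = [f"{t.teller_id},{t.amount}" for t in all_txns]
    lines.append(f"TOTAL,{sum(t.amount for t in all_txns)}")
    return "\n".join(lines)

recorded = [Transaction("T1", 10000), Transaction("T2", 5000)]
counted = {"T1": 10000, "T2": 4950}  # T2's drawer is 50 cents short
adjustments = reconcile(recorded, counted)
print(build_final_file(recorded, adjustments))
```

The home office never sees the back-and-forth of the balancing step; it receives one self-certifying file whose totals have already been verified locally.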
Boiled down, the reasons large FILE transfer was used in this interaction (instead of small TRANSACTIONS) were to hide the complexity of systems on both ends, to reduce the risk of transmission failure, and to increase the fidelity of the overall operation. Whenever you find similar “do good work, certify it and throw it over the wall” workflows in business processes, the opportunity to solve those workflows with secure and reliable file transfer usually exists.
(Will file transfer and transaction-based architectures ever converge? I think they already have begun to – look for more on that in future posts!)