Posted by: John Bresnahan | April 25, 2013

A Picture Can Beat 1000 Dead Horses

Unless this is your first time reading my blog, you are probably aware that I am becoming obsessed with the idea of a data transfer service.  In this post I continue the topic from my previous post by introducing a couple of diagrams.


A diagram of a possible Swift deployment is on the right side.  On the left is a client to that service.  The Swift deployment is very well managed, redundant, and highly available.  The client speaks to Swift via a well-defined REST API, using supported client-side software to interpret the protocol.  However, between the server-side network protocol interpreter and the client-side network protocol interpreter is the wild west.

The wild west is completely unprotected and unmanaged.  Many things can occur that cause a lost, slow, or disrupted transfer.  For example:

  • Dropped connections
  • Congestion events
  • Network partitions

Such problems make data transfer expensive.  Ideally there would be a service to oversee the transfer.  Transfers could be checkpointed as they progress so that if a connection is dropped they could be restarted with minimal loss.  The service could also try to maximize the efficiency of the pathway between the source and the destination by tuning the protocols in use (like setting a good value for the TCP window), using multicast protocols where appropriate (like BitTorrent), or scheduling transfers so as to not shoot itself in the foot.
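To make the checkpointing idea concrete, here is a minimal sketch (all function and file names are hypothetical, not part of any real service): copy a file in fixed-size chunks, record the byte offset after each chunk, and resume from the recorded offset after a failure instead of starting over.

```python
import os

CHUNK = 64 * 1024  # bytes per chunk; a real service would tune this


def transfer(src, dst, ckpt):
    """Copy src to dst, recording progress in ckpt so a failed
    transfer can resume from the last completed chunk."""
    offset = 0
    if os.path.exists(ckpt):
        # A previous attempt was interrupted: resume where it left off.
        with open(ckpt) as f:
            offset = int(f.read() or 0)
    mode = "r+b" if os.path.exists(dst) else "wb"
    with open(src, "rb") as sf, open(dst, mode) as df:
        sf.seek(offset)
        df.seek(offset)
        while True:
            chunk = sf.read(CHUNK)
            if not chunk:
                break
            df.write(chunk)
            offset += len(chunk)
            # Durable progress marker, updated after every chunk.
            with open(ckpt, "w") as f:
                f.write(str(offset))
    os.remove(ckpt)  # clean up once the transfer completes
```

This is just the local-disk version of the idea; a real transfer service would checkpoint against the network protocol (for example, an HTTP Range request picking up at the recorded offset).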

A safer architecture would look like this:


The transfer service is now in a position to manage the transfer, which allows for the following:

  • A fire-and-forget asynchronous transfer request from the client.
  • Babysitting and checkpointing the transfer, restarting it from the last checkpoint if it fails.
  • Scheduling transfers for optimal times.
  • Prioritizing transfers and their use of the network.
  • Coalescing transfer requests into multicast sessions and scheduling them appropriately.
  • Negotiating the best possible protocol between the two endpoints.
  • Verifying that the data is successfully written to the destination storage system and checking its integrity.
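On the last point, Swift already reports an MD5 ETag for plain (non-segmented) objects, so an end-to-end integrity check can be as simple as comparing that ETag against a locally computed digest.  A sketch (the function names here are my own, not any real client API):

```python
import hashlib


def md5_of_file(path, chunk=64 * 1024):
    """Compute the MD5 hex digest of a file, streaming in chunks
    so large files never need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()


def verify(path, etag):
    """Compare a local file's digest against the ETag the object
    store reported for the uploaded object."""
    return md5_of_file(path) == etag
```

A transfer service could run this check automatically after every transfer, rather than leaving it to the client to remember.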


  1. Your previous post mentioned Globus Online and GridFTP. I’m curious about what you think of these technologies, and how they relate to the kind of transfer service that you’re thinking of. What do they do well? What do you think they don’t do well, or don’t fit in this context?

    • In full disclosure, I have worked on GridFTP as a core developer since its beginning. It is very good at fast, greedy, secure, point-to-point, wide-area transfers. It has been very successful in the scientific domain. In general it is a great service, and it is open source. Some have found it a bit difficult to use and to tune properly. Globus Online is SaaS (not open source); its basic job is to manage GridFTP transfers.

      For the kind of transfer service I am describing, a driver (or plug-in) for the above services could be made and leveraged for the use case where they make sense (wide-area point-to-point transfer), but that would not be the only use case. For example, multicast protocols like BitTorrent are important.
