Downloading Made Easy

I found myself in an environment where I couldn’t easily set up a TCP connection between the clients I wanted to send an image between. I didn’t want to just re-implement TCP because I wouldn’t have any other use for its reliability; the only thing I need to do, and the only thing I want to do, is send these images. So I came up with my own involuntary image transfer protocol.

How It Works

It’s really simple. First, the host fragments the file (a compressed version of the image) into as many packets as it needs. It then sends the client a packet containing a unique ID for the image and how many fragments the file has. Every RTT (you can measure it or just guess ~100 ms), the client sends a packet filled with the fragment IDs it hasn’t received yet (which is all of them at first), and the host responds by sending every requested fragment.
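Here’s a minimal sketch of what that could look like over UDP. The post doesn’t specify wire formats, so the packet layouts, field sizes, and function names below are my own assumptions, not the actual implementation:

```python
# Hypothetical packet layouts for the three message types described above.
# Fragment payloads are capped so each fits in a single UDP datagram.
import socket
import struct
import zlib

FRAGMENT_SIZE = 1024                       # bytes of image data per fragment (assumption)
MSG_HEADER, MSG_REQUEST, MSG_FRAGMENT = 0, 1, 2

def fragment_image(image_bytes):
    """Compress the image and split it into numbered fragments."""
    compressed = zlib.compress(image_bytes)
    return [compressed[i:i + FRAGMENT_SIZE]
            for i in range(0, len(compressed), FRAGMENT_SIZE)]

def send_header(sock, addr, image_id, fragment_count):
    """Host -> client: announce the image ID and how many fragments it has."""
    sock.sendto(struct.pack("!BII", MSG_HEADER, image_id, fragment_count), addr)

def send_request(sock, addr, image_id, missing_ids):
    """Client -> host: list the fragment IDs it hasn't received yet.
    In practice you'd cap the list so the request fits in one datagram."""
    payload = struct.pack("!BIH", MSG_REQUEST, image_id, len(missing_ids))
    payload += struct.pack(f"!{len(missing_ids)}I", *missing_ids)
    sock.sendto(payload, addr)

def send_fragments(sock, addr, image_id, fragments, requested_ids):
    """Host -> client: resend every fragment the client asked for."""
    for frag_id in requested_ids:
        header = struct.pack("!BII", MSG_FRAGMENT, image_id, frag_id)
        sock.sendto(header + fragments[frag_id], addr)

# Host side, roughly:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   frags = fragment_image(open("cat.png", "rb").read())
#   send_header(sock, client_addr, image_id=1, fragment_count=len(frags))
#   ...then answer each incoming MSG_REQUEST with send_fragments().
```

The client’s side is just a loop: every RTT, send a request listing whatever fragment IDs are still missing, and stop once the set is empty.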

This protocol elegantly takes care of packet loss, out-of-order packets, and high latency, and if you measure RTT it also gets a form of congestion control, since the client only asks for a new wave of fragments once per round trip. With a small extension to the protocol you could have multiple servers send fragments to the same client, which could support a BitTorrent-like peer-to-peer network.
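As a rough illustration of the RTT-based pacing, the client could keep a smoothed RTT estimate and wait that long between request packets. The smoothing constant and class below are assumptions on my part, not part of the protocol as described:

```python
# Sketch: pace request packets by a measured RTT instead of the ~100 ms guess.
import time

class RequestPacer:
    def __init__(self, initial_rtt=0.1, alpha=0.125):
        self.rtt = initial_rtt    # current RTT estimate, seconds
        self.alpha = alpha        # EWMA weight (same idea as TCP's SRTT)

    def on_fragment(self, request_sent_at):
        """Update the RTT estimate when a requested fragment arrives."""
        sample = time.monotonic() - request_sent_at
        self.rtt = (1 - self.alpha) * self.rtt + self.alpha * sample

    def next_request_delay(self):
        """Wait roughly one RTT before sending the next request packet."""
        return self.rtt
```

When the network gets congested the measured RTT grows, the client requests less often, and the host sends fewer bursts per second, which is where the congestion-control-like behavior comes from.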

The best part is that this protocol is trivial to implement on top of UDP, and that’s the main goal.

Is It Fast?

Kind of?

I want to say it’s slow, because the way the client requests missing packets means it receives data in waves of fragments rather than as a steady stream, which I would assume would be faster.

However, the bloat around TCP (specifically its sawtooth congestion control) is painful, and since this protocol is built on UDP it could be faster on a highly congested network.

The real thing I need to do is just run some tests, so I’ve started doing that. (These are 50% done; I will update as soon as possible.)