My Next Mission

Alt. Title: "Modern-day Archeology"

Preface and Musings

I love systems, and I take what most people would probably consider an irrational amount of pleasure in rediscovering the cute little tricks and realizations that the system designers who came before me cooked up for performance and other reasons. In the course of this reading about old systems, however, I keep coming across what can only be termed abandonware.

Abandonware is software which has been abandoned (or perhaps orphaned), whether literally through the ravages of time and the deaths of good people, or simply because the developers moved on to other and hopefully greater things. Some projects, such as the language B (a predecessor of C), were left behind because superior supersets were developed. Others, such as Plan 9, were outright killed, and some, like Ada, never got traction or achieved wide acceptance and were crowded out by the products that did.

This is all old hat to anyone who has been in the software business for long. Products come and go, and life to some extent keeps moving on. The thing I find fascinating and even haunting as I read about and explore these systems is the realization that they still work. The correctness of a system and the utility of a tool for its intended task are not impacted by the passage of time.

The Use Case

I personally own and operate five devices that can be considered "full Linux hosts", that is, they boot a 2.4.x or newer kernel and have network access via WiFi, cellular, or 10/100/1000 Ethernet. To be specific, these are my Android phone, Android tablet, desktop, server (an old laptop), and the ultrabook from which I now write. The ultrabook and my desktop account for about 90% of my day-to-day document work, but when I pick up either one I want access to all the data on the other, regardless of internet connection, the status of the other host, or the status of my server.

The only way to achieve this level of independence is for each host to retain a full and continuously synchronized copy of all of my documents. However, even after removing the Windows installation from my ultrabook, I still place a premium on disk space. The synchronization property is also hard to ensure without a custom-built mess of scp or rsync jobs designed to keep the two systems' disks mirrored at all times.
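That mess might look something like the following sketch, run periodically from cron on each machine. Everything here is hypothetical: local directories stand in for the two hosts, where a real job would use user@host:path targets over ssh.

```shell
# Stand-ins for desktop:~/doc and ultrabook:~/doc; a real job would sync
# over ssh rather than between two local directories.
SRC="$HOME/syncdemo/desktop"
DST="$HOME/syncdemo/ultrabook"
mkdir -p "$SRC" "$DST"
echo "draft" > "$SRC/notes.txt"

# -a preserves times and permissions; -u skips files that are newer on the
# receiving side, which is the only (fragile) conflict handling we get.
rsync -au "$SRC/" "$DST/"
rsync -au "$DST/" "$SRC/"   # and back again, so edits flow both ways

cat "$DST/notes.txt"   # prints "draft"
```

Note the weaknesses: deletions never propagate (and adding --delete to a bidirectional pair like this would eat files), and file modification time is the sole arbiter of conflicts.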

When I worked at UT ICMB, and on the UT CS cluster, NFS shares were used to great effect to synchronize user files as well as host configurations across hundreds of machines. It's a nice solution and a highly effective one. However, it has the drawback that all the hosts involved need always-on network connections, and in the event of a network failure the disconnected client(s) have no local data to draw on.
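For reference, the client side of such a setup is a single mount, typically wired into /etc/fstab; the server name and export path below are made up for illustration.

```
# /etc/fstab on a client workstation ("fileserver" and /export/home are hypothetical)
fileserver:/export/home  /home  nfs  rw,hard  0  0
```

The hard option is what produces the failure mode described above: on a network outage, processes touching /home simply block until the server comes back.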

While it is sub-optimal, this is also essentially the system I operate now, except that mine is tunneled over an OpenVPN instance, which provides a little more isolation and flexibility, and the confidence to make private files accessible via NFS.

The Fateful Words

As I contemplated this VPN build, three professors simultaneously suggested that I investigate Plan 9, as it provides not only many of the distributed file and backup features I want but also some really interesting distributed compute capabilities. One of my professors knows Eric Van Hensbergen, Plan 9 fiend and v9fs maintainer for the Linux kernel. I may have failed in my initial goal of bringing Plan 9 to JOS, but I may learn something yet. So here goes nothing.

As soon as I can get my hands on a dedicated victim or get VMs running, I intend to start toying with Plan 9, and I hope to replace NFS with 9p once I understand what's going on under the hood. After all, I hardly have an excuse not to: plan9port is a thing, and I run Arch everywhere.
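The Linux side of that replacement looks pleasantly small. Here is a dry-run sketch, assuming a 9p file server reachable as "plan9cpu" over TCP on the standard port 564 (the server name, and everything about its configuration, are assumptions; the echos only print the commands rather than run them).

```shell
# Dry-run sketch of mounting a 9p share with the kernel's v9fs client.
# "plan9cpu" is a hypothetical server; on a live system, drop the echos
# and run the commands as root.
SERVER=plan9cpu
MNT=/mnt/9

# Load the 9p modules, then mount the remote root over TCP.
CMD="mount -t 9p -o trans=tcp,port=564,version=9p2000.u $SERVER $MNT"
echo "modprobe 9p 9pnet_tcp"
echo "$CMD"
```

Compared to the fstab line above, the interesting part is trans=tcp: v9fs can also speak 9p over virtio or Unix sockets, which is what makes the VM-based experiments attractive.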