Ox: Some motivation

Here is some of my motivation for Oxlang. I'm sorry it isn't shorter; I gave up on copy editing it. This was originally a personal email and appears almost unaltered.

Clojure is awesome. It is grammatically simple with a classical lispy syntax, and its cheap immutable data structures enable equational reasoning about most problems and their solutions.

My top bugbears & goals:

  1. Contributor culture. The contribution process should be as open, friendly and low friction as possible; see Guy Steele's comments on growing a better language from one with warts through the acceptance of contributions. The GHC team has a "3% rule": if a change improves the language or the type checker and comes at a cost of less than a 3% slowdown, it has a high probability of being accepted, and acceptance/rejection is publicly debated à la LKML. Commentary on the Clojure process may be inferred.

  2. Formalism. Formalism gives portability and is the only real route thereto. Parser spec. Evaluator spec. Kernel standard library spec with a feature list. This enables portability of code and data structures, and keeps us out of the JVM-centric mess that Clojure itself is close to right now.

  3. Documentation. I built Grimoire and will say no more on this subject. Goes with formalism requirements.

  4. Hide the host. That formalism should include the call stack and exceptions. Use the host under the hood, sure, but don't leak the host for the most part. To paraphrase Paul Phillips: just because Java or Clojure jumped off a bridge doesn't mean we can't have nice things like Data.Ord. This likely means making interop deliberately second class. Possibly even explicitly generated from introspection, but not the common case. Musings on this may be found here.

  5. Types (and protocols/interfaces) matter. Clojure and much of the Cognitect crew seem to pretend that they don't. It feels like not a week goes by without someone on IRC or Twitter posting a function that checks the runtime type of some value against some implementation type of Clojure's core. Aphyr obligingly posted today's, clojure.core.incubator/seqable?, and I have a few of my own. Enable static reasoning rather than pretending it doesn't matter and that nobody does it. Bake in type inference and something akin to prismatic/schema.

  6. Purify the language implementation. Practice what you preach. Oxlang/Kiss's immutable environments are absolutely required. Death to &env, *ns* and mutable namespaces; do reloading by dependency tracking.

  7. Give thought to being self-hosting. It's not essential off the bat, but it makes porting & compatibility easier in the long run. GHC's bootstrap-and-rebuild-itself process beckons, and Toccata plays the same trick.

  8. Give more thought to the module system & versioning culture. I want to be able to depend on "something implementing these functions in the namespace clojure.core", then require those and alias them in as such, not "Dear lein, this exact version of this library, thanks; no ranges, can't trust anyone. Okay, let's just load crap up". The language should enforce versioning because programmers sure as hell can't be trusted with it. Note that this goes with my first point to lower the cost(s) of churn in the standard library.

  9. Tooling. I'm with Chas in his comments here. As hinted in 7, the language needs to provide as much support as possible for editors, for REPLs, for packages, for optimizing, for testing. These are the things that make or break the UX of the language and if the UX isn't good people will just go back to writing node.js or whatever the latest flavor of the week is. Tooling is hard and takes time, but it must be a goal.

All of these (save the parser and evaluator spec) are goals towards which work can be accumulated over time once the semantics of the language are agreed upon. Tooling, types, speed and correctness analysis can all be done if change (growth) is cheap and the goals are agreed upon.

I think that given these changes it would be pretty easy to design embedded and memory-restricted targets for "Clojure". I'll admit that the goal of writing an OS with this language is near to my heart, but for the most part I want to improve the platform-independent experience, since most of the code I write isn't platform bound; it's exploratory scratch work.


Oxcart going forwards

When I last wrote about Oxcart, work had pretty much gone on hiatus due to my return to school. As there has been some recent interest in the status of Lean Clojure overall, I thought I'd take the opportunity to review the state of Oxcart and the plan for Oxcart going forwards.

Oxcart and Clojure

The static linking approach and lack of run time dynamism found in Oxcart are explicitly at odds with the philosophy of core Clojure. Where Clojure was designed to enable live development, and makes performance sacrifices to enable such development as discussed here, Oxcart attempts to offer the complementary set of trade-offs. Oxcart is intended as a pre-deployment static compiler, designed to take a working application and, to the greatest extent possible, wring more performance out of unchanged (but restricted) Clojure, as PyPy does for Python. As Oxcart explicitly avoids the dynamic bindings which Clojure embraces, Alex Miller, the Clojure Community Manager, has repeatedly stated that he expects to see little cross pollination from Oxcart and related work to Clojure itself.

This would be all well and good, were it not for the existing behavior of Clojure's clojure.lang.RT class. As currently implemented in Clojure 1.6 and 1.7, RT uses its static initializer (<clinit>) to compile the following resources with clojure.lang.Compiler.

  • "clojure/core"
  • "clojure/core_proxy"
  • "clojure/core_print"
  • "clojure/genclass"
  • "clojure/core_deftype"
  • "clojure/core/protocols"
  • "clojure/gvec"
  • "clojure/instant"
  • "clojure/uuid"

These represent about 10,799 lines of code, all of which could easily be statically compiled and, most importantly, tree shaken ahead of time by Oxcart or another tool, rather than being loaded at boot time. It also means that the unavoidable booting of Clojure itself from source can easily dominate the loading of user programs, especially after static compilation to raw classes. A quick benchmark on my machine shows that booting a Clojure 1.6 instance and loading a ns containing only a -main that prints "hello world" takes ~2.8 seconds from source, compared to ~2.5 seconds booting the same program compiled with Oxcart, suggesting that the shared ~2.5 second boot time is the cost of booting Clojure itself. This is the test.hello benchmark in Oxcart's demos.

$ git clone git@github.com:oxlang/oxcart.git &&\
  cd oxcart &&\
  git checkout 0.1.2 &&\
  bash bench.sh test.hello
Running Clojure 1.6.0 compiled test.hello....
Hello, World!

real    0m1.369s
user    0m3.117s
sys     0m0.083s
Oxcart compiling test.hello....
Running Oxcart compiled test.hello....
Hello, World!

real    0m1.212s
user    0m2.487s
sys     0m0.073s

Then there's the test.load benchmark. This benchmark as-is strains credulity because it compiles 502 functions, of which only the -main, which uses none of the other 501, will be invoked. This reflects more on program loading time than on the loading time of "clojure/core", but I still think it is instructive about the costs of boot time compilation, showing a ~7s boot time for Clojure compared to a ~2.5s boot time for Oxcart. As arbitrary slowdowns from macroexpansions which invoke Thread/sleep would be entirely possible, I consider this program within the bounds of "fairness".

A Fork in the Road

There are two solutions to this limitation, and both of them involve changing the behavior of Clojure itself. The first is my proposed lib-clojure refactor. Partitioning Clojure is a bit extreme, and in toying with the proposed RTUtil changes over here I've found that they work quite nicely even with a monolithic Clojure artifact. Unfortunately, there seems to be little interest from Clojure's core team (as judged via Alex's communications over the last few months) in these specific changes, or in the static compilation approach to reducing the deployment overhead of Clojure programs. The second is to fork Clojure and then make the lib-clojure changes, which solves the problem of convincing Core that lib-clojure is a good idea but brings its own suite of problems.

Oxcart was intended to be my undergraduate thesis work. While the 16-25% speedup previously reported is impressive, Oxcart does nothing novel or even interesting under the hood. It only performs four real program transformations: lambda lifting, two kinds of static call site linking, and tree shaking. While I suppose that is impressive for an undergrad, this project also leaves a lot on the table in terms of potential utility due to its inability to alter RT's unfortunate loading behavior. I also think there is low hanging fruit in doing unreachable form elimination and effect analysis, probably enough that Oxcart as-is would not be "complete" even were its emitter more stable.

I'm reluctant to simply fork Clojure, mainly because I don't think that the changes I've been kicking about for lib-clojure actually add anything to Clojure as a language. If I were to fork Clojure, it'd be for Oxlang, which actually seeks to make major changes to Clojure, not just tweak some plumbing. But writing a language so I can write a compiler is frankly silly, so that's not high on the options list. The worst part is that forking Clojure makes everything about using Oxcart harder. Now you have dependencies at build time (all of "stock" Clojure) that don't exist at deployment time (my "hacked" Clojure). Whatever hack that requires either winds up complicating everyone's project.clj or lives in an otherwise uncalled-for Leiningen plugin, just like lein-skummet. Tooling needs to be able to get around this too, since every library you'd want to use explicitly requires [org.clojure/clojure ...], a dependency which totally goes away once Oxcart emits the bits you need and throws the rest out. Most of all, I don't want to maintain a fork for feature parity as time goes on. However, I also don't see any other way to get around RT's existing behavior, since the RTUtil refactor touches almost every Java file in Clojure.

Flaws in the Stone

Oxcart itself also needs a bunch of work. While I think that Nicola has done an awesome job with tools.analyzer and tools.emitter.jvm, I'm presently convinced that while it's fine for a naive emitter (which TEJVM is), it is a sub-optimal substrate for a whole program representation and for whole program transforms.

Consider renaming a local symbol. In the LLVM compiler infrastructure, "locals" and other program entities are represented as mutable nodes to which references are held by clients (say, call sites or use sites). A rename is then simply an in-place update of the node to be changed; all clients observe the change with no further bookkeeping. This makes replacements, renames and so forth constant time updates. Unfortunately, due to the program model used by tools.analyzer and tools.emitter.jvm, such efficient updates are not possible. Instead, most rewrites degenerate into worst case traversals of the entire program AST when they could be much more limited in scope. Cutaway is one experiment in this direction, but it at best approximates what clojure.core.logic.pldb is capable of. I hope that over Christmas I'll have time to play with using pldb to store, search and rewrite a "flattened" form of tools.analyzer ASTs.
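To make the contrast concrete, here is a toy sketch in Python. It has nothing to do with LLVM's or tools.analyzer's actual APIs; it only illustrates why shared mutable nodes make a rename O(1) while an immutable tree forces a full traversal.

```python
# 1. LLVM-style: use sites hold references to a shared mutable node,
#    so a rename is a single in-place update visible at every use site.
class Local:
    def __init__(self, name):
        self.name = name

x = Local("x")
call_sites = [("invoke", "f", x), ("invoke", "g", x)]  # both reference the node
x.name = "renamed"                                     # O(1) update
assert all(arg.name == "renamed" for _, _, arg in call_sites)

# 2. Immutable-AST style (roughly the tools.analyzer model): locals are
#    plain data repeated at each use site, so a rename walks the whole tree.
def rename(ast, old, new):
    if isinstance(ast, dict):
        return {k: (new if k == "name" and v == old else rename(v, old, new))
                for k, v in ast.items()}
    if isinstance(ast, list):
        return [rename(n, old, new) for n in ast]
    return ast

ast = {"op": "do", "body": [{"op": "local", "name": "x"},
                            {"op": "invoke", "args": [{"op": "local", "name": "x"}]}]}
renamed = rename(ast, "x", "y")  # O(size of tree), even for one local
```

The second style pays a whole-program traversal for every rewrite, which is exactly the cost a pldb-style indexed, flattened representation would avoid.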

Oxcart is out of date with tools.emitter.jvm and tools.analyzer. This shouldn't be hard to fix, but I just haven't kept up with Nicola's ongoing work over the course of the last semester. This will probably get done over Christmas as well.

Oxcart doesn't support a bunch of stuff. As of right now, defmulti, defmethod, deftype, defprotocol, proxy, extend-type and extend-protocol aren't supported. I'm pretty sure all of these either work already or could easily be made to work; they just didn't get done in the GSoC time frame.

Finally, and I think this is the thing that's really blocking me from working on Oxcart: it can't compile clojure.core anyway. This is a huge failing on my part in terms of emitter completeness, but it's also a moot point, because even if I could compile clojure.core with Oxcart, RT is going to load it anyway at boot time. I also suspect that this is an incompleteness in the project as a whole, which probably makes it an unacceptable thesis submission, although I haven't spoken with my adviser about it yet.

The Endgame

As of right now I think it's fair to call Oxcart abandoned. I don't think it's a worthwhile investment of my time to build and maintain a language fork that doesn't have to be a fork. I talked about this stuff with Alexander, one of the clojure-android developers and a fellow GSoC Lean Clojure student/researcher, and the agreement we reached was that until 1.7 is released there's no way the lib-clojure changes will even get considered, and that the most productive thing we can do as community members is probably to wait for 1.8 planning and then try to sell lib-clojure and the related cleanup work on the basis of enabling clojure-android and lean clojure/Oxcart. Practically speaking, in terms of my time, if it's going to be a few months until 1.7 and then a year until 1.8, that leaves me only my last semester of college to work on Oxcart against an official version of Clojure that can really support it. If that's what it takes to do Oxcart, I'll likely just find a different thesis project or plan on graduating without a thesis.

  That said, serious interest in Oxcart as a deployment tool or another
  contributor would probably be enough to push me over the futility hump
  of dealing with a Clojure fork and get Oxcart rolling.


The Future of the LispM

This is also addressed to @lokikil towards whom I have threatened to write this post on multiple occasions. It is expected to piss off loper-os. Above all, it represents my opinions backed up only by a cursory research effort. You have been warned.

Background Of Historical LispMs

LispMs occupy a transitional period of computing industry history between time shared mainframes and the single user workstations that would become today's desktop computers. Prior to John McCarthy's invention of time sharing, all computers had been single program machines which multiple users shared via batch scheduling. Under batch processing, users would present fully formed programs, written by hand or with little tool assistance, to computer techs who would run the programs and return printed output. The introduction of time sharing opened the way to exploring how to make computers useful interactively for many users sitting at terminals.

The concept of time sharing and interactivity led to the development of more dynamic programming environments, including the Read Eval Print Loop. At the same time, the falling costs of integrated circuits were beginning to put the first workstations, single user machines featuring large memories, within the financial reach of the well funded AI laboratories which had previously depended on time shared mainframes.

In order to improve interactivity and performance, Richard Greenblatt and Thomas Knight developed the cons machine, the first of the MIT Lisp machines. Priced at ~$50,000 in 1970s dollars, cons and the successor cadr machine were single user workstations, built primarily (it seems) to address the restrictions on memory size and memory performance, and the consequent detrimental impact on general program performance, that the large memory footprint of Lisp programs imposed on time shared systems. Hardware/microcode support was added for maintaining the relatively complex Lisp environment structures, and for many of the primitive lookup/load/store operations on cons cell structures that characterize a traditional Lisp implementation's data access pattern. This combination of machine support and microcoding enabled Lisp "compilers" to be far simpler, offloading responsibility for maintaining complicated structures such as the environment from the compiler to the microcode system. Best of all, as the microcode corresponded closely to Lisp, on the rare occasions when actually writing microcode (say, for device drivers) was called for, doing so was easy, because users could transparently invoke and define microcode from their native Lisp environment.

Seeing a business opportunity, in the early '80s a number of MIT AI lab researchers departed and founded Symbolics Inc. and Lisp Machines Inc. for the purpose of building and selling Knight machine derived Lisp workstations. Ultimately, however, Lisp machines fell out of favor during the AI winter(s) and lost to what we now call the personal computer. I have seen the argument made that they were too big-ticket and missed the mass market as a result. Thus our story really begins.

Of Modern Hardware

In the years since the demise of the dedicated LispMs, Intel and the other major chip manufacturers have progressed from single cycle designs to pipelined, cache accelerated, superscalar and out of order machines. Knowledge of these designs is assumed; I suggest my earlier introduction series to these topics. At a high level, each step of this progression trades off chip area, transistors (complexity) and power budget to wring more instruction level parallelism out of existing programs.

The Symbolics 3600 technical report from '83 claims a best case cycle time of 180ns (~5MHz) and a worst case of 250ns (4MHz). Arbitrary reads from main memory are priced at 600ns (about 3x the cost of a single instruction). This low (by modern standards) gap between processor and memory speed meant that one could write programs that did lots and lots of random memory access and still have reasonable performance expectations. However, as a stack machine architecture, the 3600 offered no opportunity for the instruction level parallelism exploited by modern architectures, because the exact state of the stack, on which every instruction depends, changes with every instruction. This forces all program execution to wait during slow operations like main memory reads, division and floating point math, due to the sequential data dependency on the stack. This is a fundamental architectural failure common to all stack machines, and it is skirted by modern software stack machines like the JVM by compiling the literal stack instructions to a register machine in order to recover instruction level parallelism.
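The register-recovery trick can be sketched in a few lines. This is an illustrative toy translator, not any real JIT's algorithm: it turns stack bytecode into three-address code over virtual registers, at which point independent computations stop sharing the stack and become visibly parallel.

```python
def stack_to_registers(code):
    """Translate stack ops into three-address code over virtual registers."""
    stack, out, fresh = [], [], iter(range(10**6))
    for op, *args in code:
        if op == "push":
            r = f"r{next(fresh)}"
            out.append((r, "const", args[0]))
            stack.append(r)
        else:  # binary op: consumes two stack slots, produces one
            b, a = stack.pop(), stack.pop()
            r = f"r{next(fresh)}"
            out.append((r, op, a, b))
            stack.append(r)
    return out

# On the stack machine, every instruction below depends on its
# predecessor through the stack. In register form, (a + b) and (c * d)
# read disjoint registers and could issue in parallel.
prog = [("push", "a"), ("push", "b"), ("add",),
        ("push", "c"), ("push", "d"), ("mul",)]
for insn in stack_to_registers(prog):
    print(insn)
```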

Modern processors claim instruction times of ~0.33ns on a 3GHz machine, often retiring multiple instructions per cycle. Unfortunately, while processor core clocks have gotten faster since the days of the 3600, memories have not fundamentally improved. On modern hardware, 100ns for a random main memory read seems to be more or less normal. This represents a 27x speedup in processor speed mismatched with a 6x speedup in main memory latency, for an effective 4.5x slowdown on memory reads. Now, this is admittedly worst case memory latency. Thanks to the L1 and L2 cache layers, "memory" read times of 1ns to 3ns are not uncommon, meaning that for cache efficient programs it is quite reasonable to expect less than 2ns (about 6 cycles) for memory reads hitting the L1 or L2 cache.

I mention this because techniques like cdr coding serve to bring the average case of list access and construction down from its worst case behavior, massive heap fragmentation (which defeats caching), towards the behavior of cache-friendly sequential data layouts (classical arrays), typically via tree-like structures or SRFI-101. While traditional linked list structures can provide atrocious cache locality due to heap fragmentation, and thus cripple performance on modern cache accelerated machines, more modern data structures can provide the traditional cons list interface while improving cache locality and thus performance characteristics. Guy Steele himself has come out arguing this point.
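As a rough illustration of the idea (a hypothetical structure, not actual cdr coding nor Clojure's chunked seq implementation), a chunked list can present the (first, rest) cons interface while storing elements in contiguous arrays, so sequential traversal mostly stays within a cache-friendly chunk instead of chasing pointers:

```python
CHUNK = 32  # elements per contiguous chunk

class ChunkedList:
    """Cons-style (first, rest) interface over array-backed chunks."""
    def __init__(self, items):
        self._chunk = items[:CHUNK]   # contiguous storage
        self._tail = items[CHUNK:]
        self._offset = 0

    def first(self):
        return self._chunk[self._offset]

    def rest(self):
        if self._offset + 1 < len(self._chunk):
            # stay within the current chunk: no pointer chase
            r = ChunkedList([])
            r._chunk, r._tail, r._offset = self._chunk, self._tail, self._offset + 1
            return r
        return ChunkedList(self._tail) if self._tail else None

xs = ChunkedList(list(range(100)))
total = 0
while xs is not None:          # traversal looks exactly like walking a cons list
    total += xs.first()
    xs = xs.rest()
assert total == sum(range(100))
```

The caller sees an ordinary cons list; only one traversal in every CHUNK steps touches a new allocation.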

Without a literal implementation as a stack machine traversing linked lists, and without special hardware support for binding and so forth, the LispM instantly loses much of the vaunted simplicity which makes it attractive to program directly, and becomes simply another register machine in the mold of modern ARM or Intel machines, with strange long-running microcoded instructions more reminiscent of the VAXen than of modern RISC inspired processors. In short, due to the many performance limitations of linked lists as data structures, we as an industry will never again build machines, as the original LispMs were built, for the sole purpose of traversing and manipulating linked list data structures. Modern hardware simply offers better performance with more general applicability.

Of Modern Languages

We've also come a long way in language design and implementation. Compilers, once slow, have gotten faster and smarter. Virtual machines like the JVM, JavaScript engines and the CLR have become widely used deployment targets. The ML and Haskell families of languages have introduced us to real abstract types and abstract effects, which can be used to build programs coupled only by the abstract properties of the data consumed and generated and the effects produced. Type inference is even making such fancy behavior manageable by mere mortals, while providing language implementations with more and more information with which to perform both program level optimization and micro-optimization not possible in traditional naive Lisps.

Of LispMs Future

While we'll never build a hardware LispM again, I suspect that we will see LispM-like systems return one day in the not too distant future: not as hardware, but as software, a virtual machine designed to run atop existing and entirely adequate modern hardware. Now, in 10 years someone may make a commercial hardware LispM, at which point I will be happy to eat my words, but as of right now I don't see it happening.

The JVM and the .NET CLR have proven to be amazing platforms. While their allocation requirements perhaps prohibit their use for implementing operating systems and driver code (not that people haven't tried this sort of thing), they do offer excellent and standardized platforms. It is my personal belief that, by leveraging the flexibility that Lisp dialects have for both data driven DSLs and macro DSLs, it would be more or less trivial to implement an operating system using a DSL for generating platform specific assembly. As the "kernel" OS is implemented in the final "host" LispM language, editing the kernel is possible, albeit difficult due to the inherent difficulties of hot swapping operating systems.

Add a JIT (no, this isn't easy), preferably implemented in the same assembler DSL as the host operating system, and the stage is set for building a LispM environment as a virtual machine running atop modern hardware, using a modern JIT to achieve reasonable performance characteristics. From this vantage the way is clear to implementing the rest of the operating system and user land in a fully fledged Lisp with good reloading and introspection support, atop this kernel of a JIT and enough of an OS to provide memory and process management.

This is, I think, an entirely reasonable and practical project for a smallish team (~3 or fewer) targeting a single architecture (Intel). There was a project, Dream, an R4RS x86 assembler DSL in which an R4RS interpreter and operating system were implemented. It booted and worked more or less fine, and while it lacked polish I think it serves as an excellent proof of concept that such a self hosting Lisp (assembler, OS, runtime) triple is viable. I don't argue that such an implementation will be trivially portable to a wide variety of target hardware: as experience shows, porting nominally portable Linux, FreeBSD or FreeRTOS is actually quite hard, and porting a JIT can at best be as hard, due to tight coupling with the specific behavior of the target machine. But this is why we specify an abstract machine and then let implementations deal with the mess, as the JVM does.

I also think that Lisp as a language family could stand to learn some new tricks if someone were to make the effort to build a full Lispy OS/VM thing.

Clojure is awesome. I've written a boatload of it. It's an amazingly simple and elegant tool, and its persistent/immutable data structures are absolutely worth stealing, as is its concept of hygienic macros via symbol qualification, and probably the concept of protocols/interfaces which Clojure kinda inherits from its primary host, Java.

Racket is also awesome, although I can't claim to have written a bunch of it. Its approach to static compilation, its strong support for documentation and examples via Scribble, and its pattern matching and typechecking facilities are totally worth stealing.

Shen is interesting if only for its built-in Prolog engine, which enables searching for proofs of arbitrary program properties. While it has been criticized by the Haskell community for disregarding completeness, the ability to write and search for proofs about arbitrary properties of programs, not just types, is I think an amazingly powerful one, albeit an active research area.

But is it a LispM?

There are some people who will rail against this vision of mine, in which the "OS" is as low as we need to push a language. To my mind, the advantages of pushing a language all the way to the hardware level are scant enough, as argued above, that doing so simply does not justify the investment or the delay. Even the OS may be too low to push a language. The adventure of building an OS with a language to host that language, à la Oberon, does offer maximal conceptual unity across the combined platform and a huge leap in the ease of integrating pipelines of different applications, but it also forcibly discards programs not written in HOST_LANG for HOST_PLATFORM. This is a problem if only because the Unix model of computing, as implemented on Linux with the help of the GNU user land, is second only to the Mach hiding inside of Mac OS X in terms of apparent developer adoption. Developers booting the next Dream for the first time won't find familiar text editors or tools. It will be a brave new world.

Maybe that's a good thing. Maybe we can escape the legacy of Unix with its untyped byte streams and instead revive some of Plan 9, with a basis in lispy goodness and with more types everywhere. Maybe there's even a place in this for algebraic effects. Maybe we can join urbit in looking towards a functional, fully networked future. Maybe. If only we can look past the obsolete machines and operating systems of yesteryear with which we seem content to circlejerk, and move forwards with appropriate context and vision.

Or maybe we'll just be stuck with Linux, Unix, OpenSSL and the rest of our somewhat suspect tower of legacy code out of sheer inertia and continue getting caught up in language of the month rages out of collective ADD.

 We have not wings, we cannot soar;
       But we have feet to scale and climb
 By slow degrees, by more and more,
       The cloudy summits of our time.

 The mighty pyramids of stone
       That wedge-like cleave the desert airs,
 When nearer seen, and better known,
       Are but gigantic flights of stairs.

 The distant mountains, that uprear
       Their solid bastions to the skies,
 Are crossed by pathways, that appear
       As we to higher levels rise.

 The heights by great men reached and kept
       Were not attained by sudden flight,
 But they, while their companions slept,
       Were toiling upward in the night.

~ Longfellow


Syscall overhead and VMs

Recently, a friend of mine at ARM has been kicking around the idea that syscalls are inefficient and that a more than reasonable speedup could be achieved across the board in *nix operating systems by providing a mechanism for reducing the overhead of syscalls.

This friend of mine is a bit of a joker, but the story he tells to motivate his idea is a good one.

Say I have a yard full of logs, and I want them chopped. So I walk out, find a guy with an axe, tell him that I have some work for him, and we come to an agreement on the price. So he goes out back, chops a log, I pay him, and just as he's about to step off the curb and walk away I grab him and mention that I have another log for him to chop on the same terms. When he finally gets home to his wife, who asks him how his day went, the log chopper says, "I worked for this crazy guy who had a yard full of wood to chop, but he told me what to do one log at a time".

Unfortunately, this does indeed represent the state of syscalls in most Unix based operating systems. There isn't a syscall for saying "stat everything in this directory", for instance, although this is precisely what the ls command must do. Instead, you read the target directory, yielding an array of file names, which you then sequentially stat to get individual file properties. This is all well and good until you realize that in doing this you've incurred O(N) system call context switches. The work has to be O(N) on the number of files, because that's what ls does, but the system call context switches are pure overhead.
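The pattern is easy to see in miniature. A toy `ls -l` in Python makes one directory listing call plus one stat per entry, i.e. N+1 syscall round trips for N files:

```python
import os
import tempfile

# set up a scratch directory with three empty files
d = tempfile.mkdtemp()
for name in ("a", "b", "c"):
    open(os.path.join(d, name), "w").close()

entries = os.listdir(d)   # 1 syscall round trip (getdents under the hood)
sizes = {name: os.stat(os.path.join(d, name)).st_size  # +1 stat per entry
         for name in entries}

print(f"{1 + len(entries)} syscall round trips for {len(entries)} files")
```

Each of those `stat` calls pays the full user/kernel context switch overhead even though the kernel could, in principle, have answered all of them during the directory read.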

My response to this idea was simply that what you really want to do is treat the system call interface of the OS as its own virtual machine. A system call as currently envisioned is simply a single op against this standardized interface, with the system interrupt/context switch overhead implicit in executing that single op. Consequently, what we really want is not to make a context switch into the OS with a single op in mind, but rather a bytecode VM, Turing complete or otherwise, that implements the syscall interface as an interpreter or a JIT. Making a syscall then becomes an exercise in generating a bytecode program representing all the work you can do in one go in the context of system call privilege. Take the ls example: rather than making N+1 stat calls, our "fast ls" could simply generate a bytecode program that would do all N+1 syscall ops in a single go by representing the function

(fn [d :- Dir]
  (->> (list d)
       (map stat)))

This gets really fun when you start thinking about what it means in the context of an operating system where the bytecode machine exposed by the OS for syscalls is the same as the bytecode machine exposed for general programming. Suddenly the OS becomes almost an exokernel that you can call out to with lambda functions, just as if it were nonprivileged code. Given a JIT this really gets fun: since a program must start running at the highest privilege level it will ever require, it must be safe to hoist the entire program to that level. Consequently, a JIT could actually inline out the syscall privilege check, performing it once at program start and then simply running the entire program with OS privileges in OS context. Clearly this works better for a managed language VM than for a hardware language VM with pointer crafting :P.
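Here's a toy model of the idea. The interface, op names and "kernel" are entirely made up; an in-process interpreter stands in for a privileged one, and an in-memory dict stands in for the filesystem. The point is only the shape: user code submits a whole program, and the "kernel" runs every op in one entry.

```python
def kernel_execute(program, table):
    """Run a whole batch of ops inside one (simulated) privileged entry."""
    results = []
    for op, *args in program:
        # resolve ("result", i) references to earlier ops' results, so
        # later ops can consume what earlier ops in the batch produced
        args = [results[a[1]] if isinstance(a, tuple) and a[0] == "result" else a
                for a in args]
        if op == "map":                      # apply a "syscall" over a list result
            fn, xs = args
            results.append([table[fn](x) for x in xs])
        else:
            results.append(table[op](*args))
    return results

# toy "syscalls" over an in-memory filesystem
fs = {"/logs": ["a.log", "b.log"], "a.log": 120, "b.log": 40}
table = {"listdir": lambda d: fs[d], "stat": lambda f: fs[f]}

# "stat everything in this directory" as a single submitted program:
# one simulated kernel entry instead of N+1 context switches
batch = [("listdir", "/logs"),
         ("map", "stat", ("result", 0))]
print(kernel_execute(batch, table))
```

The `("result", 0)` indirection is what makes the batch more than a list of independent calls: it expresses the list-then-stat-each dependency from the ls example entirely inside the kernel entry.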

Interestingly, it turns out that this is not a new idea. Apparently some old IBM research machines actually had a full interpreting stack machine baked into the OS kernel to do exactly this, but I haven't been able to track down public information on such an operating system.

牛: the environment model

Disclaimer: Oxlang is vaporware. It may exist some day, there is some code, however it is just my thought experiment at polishing out some aspects of Clojure I consider warts by starting from a tabula rasa. The following represents a mostly baked scheme for implementing ns and require more nicely in static (CLJS, Oxcart) rather than dynamic (Clojure) contexts.

Unlike Clojure, in which the unit of compilation is a single form, Oxlang's unit of compilation is the "namespace", or a single file. Oxlang namespaces are roughly equivalent to Haskell modules in that each comprises a "header" followed by a sequence of declarative body forms.

In Clojure, the ns form serves to imperatively create and initialize a namespace and binding scope. This is done by constructing a new anonymous function and using it as a class loader context to perform cached compilation of depended-on namespaces. Subsequent forms are compiled as they occur and the results are accumulated as globally visible defs.

Recompiling or reloading a file does exactly that: the ns form is re-executed, incurring more side effects, and all forms in the file are re-evaluated, generating more defs. However, this does not discard the old defs from the same file, nor purge the existing aliases and refers in the reloaded namespace. This can lead to interesting bugs where changes in imports and defs create name conflicts with the previous imports and cause reloading to fail. The failure to invalidate deleted defs also creates conditions where, for instance during refactorings, the old name for a function remains interned and accessible in the program runtime. Code which depends on the old name continues to evaluate successfully until the entire program is reloaded in a fresh runtime, at which point the missing name surfaces as a dependency fault.
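The stale-def hazard falls out of reloading into the same mutable environment, which never deletes anything. A minimal Python analogy (a dict plus exec standing in for a Clojure namespace; purely illustrative):

```python
# Sketch of the stale-def hazard: re-evaluating a file's forms into the
# *same* mutable environment accumulates defs but never removes them.
env = {}

exec("def old_name(): return 1", env)   # version 1 of the file
exec("def new_name(): return 1", env)   # version 2: the function was renamed

# After the "reload", both names still resolve, so code calling old_name
# keeps working. The dangling reference only surfaces in a fresh runtime.
stale = "old_name" in env and "new_name" in env
```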

To be fair, the Var mechanism enables extremely cheap code reloading precisely because all bindings are dynamically resolved anyway. There is exactly zero recompilation cost for new code beyond compiling the new code itself, since the Var lookup is performed at invoke time rather than at assemble time.

Unfortunately, in my Clojure development experience the persistence of deleted symbols has resulted in more broken builds than I care to admit. Building and maintaining a dependency graph between symbols is computationally inexpensive, is a key part of many language-level analyses for program optimization, and, critically here, provides better assurance that REPL development behavior is identical to program behavior in a cold program boot context.

In order to combat these issues, two changes must be made. First, re-evaluating an ns form must yield a "fresh" environment that cannot be tainted by previous imports and bindings. This resolves the import naming conflicts by making them impossible. By modeling a "namespace" as a concrete "module" value having dependencies, public functions and private functions, we can mirror the imperative semantics enabled by Clojure's defs and Vars simply by accumulating "definitions" into the "module" as they are compiled.

This model isn't a total gain, however, due to the second change: reloading entirely (and deliberately) invalidates the previous definitions of every symbol in the reloaded namespace by swapping out the old namespace definition for the new one. This implies that other namespaces/modules which depend on a reloaded module must themselves be reloaded, in topological sort order, once the new dependencies are ready. That requires dependency tracking and reloading infrastructure far beyond Clojure's (which has none). Naively this must take place on a file-by-file basis as in Scala; however, by tracking the change timestamps of source files and the hashes of individual def forms, a reloading environment can prove at little cost that no semantic change has taken place and incur only the minimum change cost. I note the effectiveness of GHCi at enabling interactive development under equivalent per-file reloading conditions as evidence that this model is in fact viable for the interactive workflow we associate with Clojure development.
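The reload-ordering part of this can be sketched in a few lines. The module names and the dependency graph below are invented; the point is that only the transitive dependents of a changed module need reloading, and the standard library's topological sort gives the order for free.

```python
# Sketch of reload ordering under the model above: when a module changes,
# its transitive dependents must be reloaded in dependency order.
# The graph here is invented for illustration.
from graphlib import TopologicalSorter

# depends_on[m] = the set of modules that m requires
depends_on = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}

def reload_order(changed, depends_on):
    """Return only the modules that must be reloaded, dependencies first."""
    # Invert the graph to find transitive dependents of the changed module.
    dependents = {m: set() for m in depends_on}
    for m, deps in depends_on.items():
        for d in deps:
            dependents[d].add(m)
    dirty, stack = set(), [changed]
    while stack:
        m = stack.pop()
        if m not in dirty:
            dirty.add(m)
            stack.extend(dependents[m])
    # static_order() yields dependencies before their dependents.
    full_order = TopologicalSorter(depends_on).static_order()
    return [m for m in full_order if m in dirty]

order = reload_order("a", depends_on)   # everything depends on "a"
```

The timestamp/hash check described above would simply prune modules out of `dirty` when their source is provably unchanged.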

With "namespaces" represented as concrete immutable values, we can now define namespace manipulation operations such as require and def in terms of functions which update the "current" namespace as a first class value. A def when evaluated simply takes a namespace and returns a new namespace that "happens" to contain a new def. However the work performed is potentially arbitrary. refer, the linking part of require, can now be implemented as a function which takes some enumeration of the symbols in some other namespace and the "current" environment, then returns a "new" environment representing the "current" environment with the appropriate aliases installed.

This becomes interesting because it means that the return value of load need not be the eval result of the last form in the target file; it can instead be the namespace value representing the final state of the loaded module. Now, given a caching/memoized load (which is require), we can talk about an "egalitarian" loading system where user-defined loading paths are possible, because refer only needs the "current" namespace, a "source" namespace and a spec. Any function could generate a "namespace" value, including one which happens to perform loading of an arbitrary file as computed by the user. Technomancy's egalitarian ns proposal, which enables hosting multiple versions of a single lib simultaneously in a single Clojure instance, is one possible application of this behavior.
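Continuing the illustrative Python model, `require` is then nothing but a memoized `load`, and because `refer` only consumes a namespace value, any user-supplied function that produces one can participate in loading. All names here are invented:

```python
# Sketch: `load` returns a namespace *value* (not the last form's result),
# and `require` is just `load` memoized by name. Illustrative only.

def load_from_source(name, source_fn):
    """Produce a namespace value by running a user-supplied loader."""
    return {"name": name, "defs": dict(source_fn())}

_loaded = {}

def require(name, source_fn):
    """Memoized load: evaluate each namespace at most once."""
    if name not in _loaded:
        _loaded[name] = load_from_source(name, source_fn)
    return _loaded[name]

calls = []
def fake_loader():
    calls.append(1)                   # count how many times we evaluate
    return {"greet": lambda: "hi"}

ns1 = require("demo", fake_loader)
ns2 = require("demo", fake_loader)    # cache hit: the loader is not re-run
```

Keying the cache on something richer than a bare name (say, name plus version) is what would let two versions of one lib coexist, as in the egalitarian ns idea.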

It is my hope that this approach greatly simplifies the implementation of namespaces and code loading. However, one advantage of the Var structure is that it enables forward and out-of-order declarations, which is immensely useful while bootstrapping a language runtime ex nihilo, as done in the Clojure core. ns itself must exist in the "empty" namespace: otherwise, since the "empty" namespace is used to analyze the first form in a file ((abstractly) stateless compiler ftw), the ns form itself will not resolve and no program can be constructed. Ox could follow Clojure's lead and cheat by implementing ns not in Ox but by bootstrapping it from Java or Clojure or whatever the implementing language turns out to be, but I'd like to do better than that. This is a problem I haven't solved yet.