Strictly Tagged Clojure

This summer I've been an intern at Factual, and this is an experience report from the semiannual internal hackathon where Alan 'amalloy' Malloy and I experimented with using Alexander Yakushev's Skummet fork of Clojure to emit lean(er) bytecode.

Some motivation

One of Clojure's primary use cases is as a more palatable tool for interacting with the rich Java ecosystem and existing Java libraries. Because of its facilities for such interoperation, Clojure is even sometimes used to write performance-sensitive code which would otherwise be written in Java. However, there are limits to how successfully this can be done.

While JVM bytecode is statically typed, Clojure is an aggressively dynamically checked language which makes pervasive use of the Object type to delay typechecking. To this end, Clojure uses JVM reflection to resolve instance fields and methods when performing interop on values of unknown type. This is correct for unknown types, but reflective access is slow compared to direct access for known types, so it has long been possible to write type hints which advise Clojure about the runtime JVM type of a value and enable Clojure to use direct access and direct method invocation rather than reflective access.
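For example (a minimal illustration of hinting, not code from the hackathon):

(set! *warn-on-reflection* true)

;; Unhinted: the compiler can't resolve .length, warns, and falls back to
;; reflective access at runtime.
(defn len-slow [x]
  (.length x))

;; Hinted: compiles to a direct, non-reflective String.length() call.
(defn len-fast [^String x]
  (.length x))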

However, these hints are not types in the sense of a static type being a contract on the domain of values; they are merely hints for reflection elimination and place no contract whatsoever on the domain of a hinted value.

This reflection-eliminating hinting comes at the cost of emitting checkcast instructions. As the JVM is statically typed, one cannot simply swear that a value is of a type; a checking cast must be used. When emitting non-reflective method calls and field accesses, Clojure does not statically know (and makes no attempt to prove) that the value or expression in play is in fact of the type you tagged it with. All local variables and function parameters which are not JVM primitives are stored as Objects, and so must be checkcast to the desired type on every use.

So are we stuck trading slow reflective access for checkcast instructions (faster, to be sure, but a source of method bloat when doing lots of interop on previously checked values)?

Of course not! While Clojure does not currently have support for real contractual types when using tags, we can sure add such behavior!

Now, before you burn me at the stake for being a heretic: since Clojure does not currently have strict local types, we clearly can't just make tags strict. TEMJVM actually makes that mistake, and as a result cannot compile clojure/core because, among others, clojure.core/ns-publics makes use of a type hint which is safe as a non-strict tag but incorrect in the context of strict tags. This has to be an additive, opt-in change.
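To illustrate the hazard with a hypothetical example (not the actual ns-publics hint):

;; Under advisory tags this is harmless: the ^String tag is never consulted
;; because x is never used for interop, so stock Clojure accepts any argument.
(defn first-or-self [^String x]
  (if (coll? x) (first x) x))

(first-or-self [1 2 3]) ;=> 1 on stock Clojure
;; Under a strict-tags regime the compiler would insist that x really is a
;; String, and a call like the above would fail with a ClassCastException.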

So what Alan and I did was create a new special fn metadata flag, ^:strict. If a fn body being compiled has the ^:strict tag, then and only then are type tags treated as a contract rather than as advisory. This is a strictly additive change because stock Clojure will ignore the metadata and emit less efficient but still correct code.

So as an example, let's consider the following fn:

(defn sum ^long [^Iterable xs]
  (let [iter (.iterator xs)]
    (loop [tot (long 0)]
      (if (.hasNext iter)
        (recur (+ tot (long (.next iter))))
        tot))))

Eliding a bunch of implementation details for brevity, this fn compiles on stock Clojure 1.7 to the following JVM bytecodes:

public final long invokePrim(java.lang.Object xs);
   0  aload_1 [xs]
   1  aconst_null
   2  astore_1 [xs]
   3  checkcast java.lang.Iterable [47]
   6  invokeinterface java.lang.Iterable.iterator() : java.util.Iterator [51] [nargs: 1]
  11  astore_2 [iter]
  12  lconst_0
  13  nop
  14  lstore_3 [tot]
  15  aload_2 [iter]
  16  checkcast java.util.Iterator [53]
  19  invokeinterface java.util.Iterator.hasNext() : boolean [57] [nargs: 1]
  24  ifeq 51
  27  lload_3 [tot]
  28  aload_2 [iter]
  29  checkcast java.util.Iterator [53]
  32  invokeinterface java.util.Iterator.next() : java.lang.Object [61] [nargs: 1]
  37  invokestatic clojure.lang.RT.longCast(java.lang.Object) : long [64]
  40  invokestatic clojure.lang.Numbers.add(long, long) : long [70]
  43  lstore_3 [tot]
  44  goto 15
  47  goto 52
  50  pop
  51  lload_3
  52  lreturn
    Local variable table:
      [pc: 15, pc: 52] local: tot index: 3 type: long
      [pc: 12, pc: 52] local: iter index: 2 type: java.lang.Object
      [pc: 0, pc: 52] local: this index: 0 type: java.lang.Object
      [pc: 0, pc: 52] local: xs index: 1 type: java.lang.Object

// Method descriptor #77 (Ljava/lang/Object;)Ljava/lang/Object;
// Stack: 5, Locals: 2
public java.lang.Object invoke(java.lang.Object arg0);
   0  aload_0 [this]
   1  aload_1 [arg0]
   2  invokeinterface clojure.lang.IFn$OL.invokePrim(java.lang.Object) : long [79] [nargs: 2]
   7  new java.lang.Long [30]
  10  dup_x2
  11  dup_x2
  12  pop
  13  invokespecial java.lang.Long(long) [82]
  16  areturn

So here we have two methods. The first, invokePrim, takes an Object and returns a primitive long, since we long-hinted our function. The invoke method is a wrapper around invokePrim which "boxes" (wraps in an object) the primitive result of calling invokePrim. This allows our fn to be used both by code which wants and can use a primitive long, and by code which doesn't know or care and just wants an Object back, as a normal fn would return.

So let's dig into the invokePrim method.

  1. Load xs off the arguments stack. It's just typed Object because that's the parameter type.
  2. Load the constant nil.
  3. Store the nil to the local named xs, thus clearing it. Note that in the locals table, xs has the type Object. This means that when we get xs from the local, we have to checkcast it again because we've lost type information by storing and loading it.
  4. checkcast the xs we loaded to Iterable since we don't really know what it is.
  5. invokeinterface of the .iterator method to get an Iterator from our now guaranteed Iterable.
  6. Store our Iterator into the iter local (discarding type information as above)
  7. Load the constant 0 from the class constant pool.
  8. Store the tot local (primitive typed)
  9. Load our iter
  10. checkcast because in storing it we forgot that it's an Iterator
  11. invokeinterface to see if there are elements left, producing a primitive boolean
  12. Branch on the boolean, going to 22 (in this list) if false
  13. Load tot from the local
  14. Load iter from the local
  15. checkcast that iter is still Iterator
  16. invokeinterface to get the next value from the iterator, producing an Object
  17. invokestatic to call the static clojure.lang.RT method for converting Objects to primitive longs.
  18. invokestatic to add the two primitive longs on the stack
  19. Store the new value of tot
  20. Loop back to 9.
  21. Clear the stack
  22. Load tot
  23. return

So with the exception of the first checkcast, which ensures that the Object we got as an argument really is an instance of Iterable, the checkcasts after loads are all provably uncalled for. The static types of these values are known because their Java signatures are known; the only reason we have to emit all these checks is that the Compiler throws that information away by storing these values in untyped (Object) locals.

The Hack

Every Expr in clojure.lang.Compiler already knows (or can state) its type, either tagged or inferred, and whether it has such a tag. However, these stated Java classes are lies! A function invocation (an IFn.invoke call site) is statically typed to return Object (unless it's a primitive call site, but we know that as well) no matter what the tag on the IFn being invoked may say. For example, clojure.core/str is tagged ^String and does indeed return a String, but after invoking the appropriate IFn the JVM doesn't know that there's a String on the stack, because the IFn interface discards that type information. It just knows it has an Object. The fix is that we add an Expr.needsCast method and implement it for every instance of Expr in Compiler.java. Now, in strict mode, we know that unless Expr.needsCast returns true, the value on the stack after Expr.emit absolutely is of type Expr.getJavaClass. Otherwise we cannot avoid the checkcast.
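You can see the mismatch from the REPL (an illustration of the status quo, not of our patch):

;; The ^String return tag on clojure.core/str is just metadata...
(:tag (meta #'clojure.core/str)) ;=> java.lang.String

;; ...which lets the compiler resolve .toUpperCase without reflection here,
;; but because IFn.invoke is declared to return Object, stock Clojure must
;; still emit a checkcast to String before the invokevirtual.
(defn shout [x]
  (.toUpperCase (str x)))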

We also have to change the behavior of locals so that we can emit locals with types other than Object. By typing locals we preserve their tagged or inferred type information across loads and stores. This allows the Expr representing a use of a local to report that it needs a cast only when the use site doesn't have the same tag as the binding's type and we cannot statically show that no cast is required.

With these changes, our modified Compiler.java can indeed produce and use strictly typed locals. So let's add our annotation...

(defn ^:strict sum ^long [^Iterable xs]
  (let [iter (.iterator xs)]
    (loop [tot (long 0)]
      (if (.hasNext iter)
        (recur (+ tot (long (.next iter))))
        tot))))

And generate bytecode on our modified version of Skummet 1.7-RC1-r4 (again abbreviated).

public final long invokePrim(java.lang.Object);
  Code:
     0: aload_1
     1: aconst_null
     2: astore_1
     3: checkcast     #30                 // class java/lang/Iterable
     6: invokeinterface #34,  1           // InterfaceMethod java/lang/Iterable.iterator:()Ljava/util/Iterator;
    11: astore_2
    12: lconst_0
    13: nop
    14: lstore_3
    15: aload_2
    16: invokeinterface #40,  1           // InterfaceMethod java/util/Iterator.hasNext:()Z
    21: ifeq          43
    24: lload_3
    25: aload_2
    26: invokeinterface #44,  1           // InterfaceMethod java/util/Iterator.next:()Ljava/lang/Object;
    31: invokestatic  #49                 // Method clojure/lang/RT.longCast:(Ljava/lang/Object;)J
    34: ladd
    35: lstore_3
    36: goto          15
    39: goto          44
    42: pop
    43: lload_3
    44: lreturn
  LocalVariableTable:
    Start  Length  Slot  Name   Signature
       15      29     3   tot   J
       12      32     2  iter   Ljava/util/Iterator;
        0      44     0  this   Ljava/lang/Object;
        0      44     1    xs   Ljava/lang/Object;

public java.lang.Object invoke(java.lang.Object);
  Code:
     0: aload_0
     1: aload_1
     2: invokeinterface #59,  2           // InterfaceMethod clojure/lang/IFn$OL.invokePrim:(Ljava/lang/Object;)J
     7: new           #13                 // class java/lang/Long
    10: dup_x2
    11: dup_x2
    12: pop
    13: invokespecial #62                 // Method java/lang/Long."<init>":(J)V
    16: areturn

The win compared to the original bytecode should be obvious. Sure enough, in the invokePrim method we emit only the one checkcast we absolutely must have, because the xs argument really could be anything. The tot and iter locals are both statically typed, so we can just load them and invoke the appropriate interface methods directly.

In some simple benchmarks, this optimization translates to a 5-10% performance improvement on this fn, which isn't too impressive. However, other fns like clojure.core/str were in our testing able to get up to 20% performance improvements from strict locals.
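For reference, the shape of the harness (a sketch assuming the criterium benchmarking library; the numbers above come from our internal runs, not from this snippet):

(require '[criterium.core :as crit])

;; java.util.ArrayList is Iterable, so it satisfies sum's ^Iterable hint.
(let [xs (java.util.ArrayList. (range 1000))]
  (crit/quick-bench (sum xs)))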

Disclaimer

This is the product of a two-day hack. While Alan and I have been able to get it to work and emit working code, this honestly isn't something we're comfortable taking to production yet. Some clear wins remain on the table, such as emitting typed fn arguments (by popping arguments, checking them, and then putting them in typed local bindings for use) and taking advantage of types on closed-over locals.

What Didn't (seem) To Work

While Alan debugged Compiler.java and picked off some type-related wins we hadn't gotten yet, I worked on adding more inlining behavior to clojure.core. Much of core, especially the lexically early fns, consists of thin wrappers around interop on clojure.lang.RT, which does have reasonable interface types on most of its methods.

The hope was that, combined with the typed-locals work, preserving more type information across calls to the Clojure standard library and inlining the standard library down to interop calls with clearly inferable types would let us produce demonstrably faster code.

While in theory this should be at least break-even and probably a win, we haven't managed to benchmark it well and show a clear gain from the aggressive inlining work. In fact the really interesting case, an aggressively inlined into able to use a typed, static transient loop, is impossible: into is implemented in terms of reduce, which takes the reducing fn as a value and then dispatches via clojure.lang.IReduce in order to get fast iteration over chunked seqs. We can't statically inline a call site through a fn taken as a value, so that's the end of the line for that idea.

Inline Unrolling

We were however able to fully inline some interesting cases of clojure.core/assoc and clojure.core/conj. A common pattern in Clojure is to write a function f which has a zero-argument case returning a constant, a one-argument case returning the argument unmodified, and a two-or-more-arguments case in which the two-argument operation is reduced over the arguments provided; a sketch of the pattern and the unrolling follows. Rather than directly implement IFn, functions emitted by Clojure instead extend clojure.lang.AFn (source), which provides some out-of-the-box support for functions of variable arity and function application.
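Here is the shape of that pattern for a conj-like fn, and the unrolling it admits (a hypothetical sketch, not the clojure.core source):

(defn conj*
  ([] [])
  ([coll] coll)
  ([coll x] (clojure.lang.RT/conj coll x))
  ([coll x & xs] (reduce conj* (conj* coll x) xs)))

;; A call site of known, fixed arity such as (conj* c 1 2 3) can be unrolled
;; at compile time into nested two-argument calls,
;;   (conj* (conj* (conj* c 1) 2) 3)
;; skipping both the varargs seq allocation and the reduce dispatch.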

Next Steps

These changes were motivated by internal performance requirements and will likely get polished until they are ready to be upstreamed to Skummet. While we expect that the patch we have developed will never be included in Clojure as-is, if only due to its high impact, we hope to see this behavior or something like it enter the mainline in a future version of Clojure.

Edit 1: Skummet pull request submitted

Zach's Bookshelf

So ztellman posted some of his library, and I was bored (and also more than somewhat curious what Zach reads), so I figured I'd make a list in the hope that I'd at least find some interesting reading material. It's not complete, but it's pretty interesting as-is. I'll happily take changes if anyone else cares to read the original.

Top Shelf

  • The Ocean at the End of the Lane
  • Gravity's Rainbow
  • The Bug
  • The Cleanest Race
  • 99 Novels
  • Everything and More
  • The Outsider
  • The Elements of Typographic Style
  • Mythologies
  • The Occult
  • The Hidden Dimension
  • The Sleepwalkers
  • Introduction to Metaphysics
  • Notes on the Synthesis of Form
  • The Language Instinct
  • Basic Techniques of Go
  • Simulacra and Simulation
  • Gödel, Escher, Bach
  • A Thousand Plateaus
  • Guattari
  • Warfighting

Second Shelf

  • Riddley Walker
  • Embassytown
  • Immortality
  • One Hundred Years of Solitude
  • The General in the Labyrinth
  • 2666
  • Veniss Underground
  • The Human Comedy
  • Budayeen Nights
  • Exile Kiss
  • A Fire in the Sky
  • A Clockwork Orange
  • Cryptonomicon
  • Crystal Express
  • Count Zero
  • Snow Crash
  • Burning Chrome
  • The Diamond Age
  • Zero History
  • The Executioner's Song
  • Spook Country
  • Dirk Gently's Holistic Detective Agency
  • The Third Policeman
  • Death in Venice
  • The Book of Imaginary Beings
  • Sparrow
  • Illuminatus
  • At Swim-Two-Birds
  • The Gray Prince
  • The Demon Princes

Third Shelf

  • The Stories of Ray Bradbury
  • War Against Cliche
  • The Complete Stories of J.G. Ballard
  • Stories of Three Decades
  • Seven Science Fiction Novels of H.G. Wells
  • The Best of Gene Wolfe
  • The Complete Short Stories of Ernest Hemingway
  • Italian Folktales
  • Case and the Dreamer
  • The Essential Pinter
  • Three Great Works

Bottom Shelf

A literal shitload of Philip K Dick

Of Hobbes and Reason

This was originally a term paper of mine, written in markdown, and appears here because reasons.

Update: The revised and significantly better final version of this paper now appears here. You aren't missing much; it's much better than the original.


In his Leviathan, Hobbes presents an epistemology, an account or ontology of the various passions and thoughts of people, and a theory of ethics founded upon individual equality and egoism, from which he develops a political theory of absolutism. This theory of ethics purports to show how humans existing in a "state of nature", at odds with all other humans, can discover contracts and governments for their mutual protection through the use of what Hobbes names Reason.

Hobbes defines Reason to be the faculty of the mind whereby we perform any task founded upon the "adding" and "subtracting" of ideas to produce and communicate results (5.2). In 5.4, Hobbes expounds that Reason "is not the finding of the sum and truth of one of a few consequences, remote from the first definitions and settled significations of names, but to begin at these and proceed from one consequence to another". Geometry, logic and law are listed as examples of such consequential productions from first definitions to conclusions. An interesting aspect of Hobbes' conception of Reason is that it explicitly excludes his conception of Prudence (5.4).

Where Hobbes conceives Reason to be quite literally symbolic arithmetic, this conception of Prudence is distinct from and free of Reason as it has no component of Language, merely that of Memory. Prudence is defined to be that faculty of remembering past events and their sequels and, supposing that like sequels will follow like actions, projecting the consequences of like actions on the present condition (3.7). This point Hobbes emphasizes by pointing out that it is not prudence which distinguishes man from beast (3.9), for there are "beasts that at a year old observe more and pursue that which is for their good more prudently than a child can do at ten" but rather Reason alone which accounts for the elevation of man by God, the "first author of speech" (4.1).

This distinction between Reason-with-Language and Prudence-from-Experience is critical for Hobbes, as the goal of his project is to show that humans can invent his platonic state, and all the requirements for it, from a state of nature having no prior experience of a state of any kind. This production demands that there be no possibility that human beings miss the mark and fail to arrive at his platonic state. Hobbes illustrates the distinction (4.21) by describing two swordsmen, one of infinite Prudence, the other of infinite Science, both having total mastery of their weapon. Both having total mastery, they are equals and well matched, "both useful, but the latter [Science/Reason] infallible".

This distinction between Prudence and Reason in infallibility is critical for Hobbes' defeat of The Fool. Were the derivation of the command to keep contracts founded on experience or aught but Reason common to all people, then it could not be a universal command, for there would exist the possibility of some person having a life experience which led them to conclude from Prudence that they need not keep their contracts. Hobbes does allow that some people, from Pride or other faults, will fail to realize the truth of his assertions, and that society may (indeed must) be established to thwart them.

Just as Hobbes' conception of Reason is not in keeping with the definition of reason in common usage, so his production of Natural Law and his plan for the escape from the War of All Against All involve a little gymnastics. In order to arrive at Hobbes' derivation of Natural Law, we must first appreciate his conception of the Passions and the Will presented in chapter 6 of Leviathan, as it is ultimately the will of a man (in the common, not the Hobbesian, sense) which determines that which he does and under what principles he will do so.

Through Deliberation (6.49) humans take the sequent sum of their appetites, aversions, hopes, fears and other interests. The Will of a human is, for Hobbes, the last element or term in this sequence of inclinations and aversions, and it is from this final Will that actions occur (6.53). An important aspect of this production, noted in 6.53, is that this definition by no means demands that the Will be rational. As the sum of inclinations and aversions, the Will of a man may indeed be counter to all reason (this being insanity) and yet still constitute voluntary action. However, the Will of a man according to Hobbes must include Reason as a factor, for part of the calculation of Will is Hope, which is defined to be the Opinion (7.4) that something is possible or achievable as a product of Reason (5.16,17).

Natural Law (14.3) is not so much, then, an absolute rule or set of principles which humans must rationally and consciously seek and discover each for themselves. Rather it is a set of products of the various inclinations and aversions of man presented in chapters 11 and 13, which influence the Will of all humans, overwhelming other "lesser" inclinations and aversions by their weight in the calculation of a person's Will. Of such weight are these considerations that in all cases where they become engaged they must dictate the outcome of Deliberation, unless the person Deliberating commit some error of Reason such as to suppose false premises.

The Right of Nature (14.1) exemplifies this pattern in its declaration that "each man hath to use his power as he will himself for the preservation of his own nature". That is, from 13.14, "the passions that incline men to Peace are the Fear of death", with Fear (6.17) in keeping with the common usage. As death is the end of life and thus frustrates all Hopes, unless the alternative to death be itself death, no computation of Will according to Hobbes could outweigh the horror of suffering death. The horror of death being common to all people, unless they Will against all Hope or reason (14.4) no person may take their own life, or take actions which they may foresee by Reason or even Prudence will lead to their own death.

Hobbes' First Law of Nature follows from a similar production. All people may lay equal claim to any thing, out of desire for ease, power or any other reason (11), with equal strength to destroy one another and hold the desired article (13.1), this being Hobbes' definition of War (13.7). By the same argument from the fear of death used in the construction of the Right of Nature, all people, save perhaps glory seekers, must revile the condition of War, for it is by definition the persistent threat of death, the most fearful possible thing. Rather, people should "seek peace and follow it", that is, take whatever steps necessary to escape the state of nature and prevent a return to it through the establishment of governments, the regulation of their own conduct, and so forth. Failing this, the Right of Nature demands the conclusion that if there be no hope of Peace and people must exist in the state of War, then they must Will to employ all the advantages of war to protect themselves.

The Second Law of Nature is derived from the First, and depends on the idea in the Right of Nature that through equal strength (the capacity of threatening death to another) all people have equal Right or Liberty (14.3, 14.4) to claim all things. If this be true, then the only path to reducing the fear that another may assert their Liberty at one's expense is to somehow get others to abandon their Rights, thereby reducing competition and distrust. As no person will lay down a Right, and thus ensure the security of another, without due compensation, exchanges of Rights must occur, and these Hobbes terms a Contract (14.9). The proof of such exchanges: as by strength (even of arms) no person can secure themselves (13.1), and the sources of Fear are competition, diffidence and glory seeking (13.6), no person can reduce the competition by elimination of their competitors without thereby increasing diffidence towards them, nor so eliminate the glory seekers. Only by seeking aid (11.5), in the form both of protection and of the renunciation of others' liberties at the price of their own various liberties, may people reduce their fear and achieve Peace. As with the First Law of Nature, while by reason one may formally deduce this Law of Nature, people will exemplify it as an emergent phenomenon, for it is the only mechanism by which individuals may reduce the competition which threatens them and thus achieve Peace.

The Third Law of Nature follows from the Second and the First. As without Contracts being kept no person may have Hope of Contracts allowing them to reach the state of Peace, Reason gives the result that no person can Will the abandonment of Contracts without also abandoning those which bind their erstwhile competitors from causing them harm, for The Fool's abandonment of Contracts signals to others that their Contracts for safety with The Fool may not be honored. Thus, while The Fool may have made no direct threat against anyone save the person damaged, to attack one is to renounce the support of all. "He, therefore, that breaketh his covenant and consequently declareth that he thinks he may with reason do so, cannot be received into any society that unite themselves for peace and defense but by the error of them that receive him; nor when he is received be retained in it without seeing the danger of their error; which errors a man cannot reasonably reckon on as the means of his security; and therefore if he be left or cast out of society he perisheth" (15.5). Thus the relative certainty of death, should one be either cast out of or simply unable to enter a civil society, factored together with the fearfulness of the prospect of death, must compel people by Reason, on the weight of their fear, to keep their contracts lest they be unable to escape the state of War.

Students of game theory should readily realize that Hobbes here is simply founding his ethics upon the Nash equilibrium of a game of mutually assured destruction. That is, Hobbes' theory of human nature and the State of Nature is so constructed that any behavior by any person but that "dictated" by the Laws of Nature is with certainty harmful (or at least fearful) compared to the outcome of behavior in accordance with the Laws. The other Laws of Nature presented by Hobbes in chapter 15 are merely corollaries to the above three Laws, supporting the equilibrium of peaceful mutual contracting. The dictates to gratitude, complacence, equity and pardon are not fundamental to this theory; rather they are consequences which serve to reduce conflict and thus preserve peace. The same is true of the others, especially the injunctions against arrogance and pride, which Hobbes explicitly lists in chapter 6 as the vain-glorious causes of strife. These corollaries correspond directly to the game-theoretic results on the optimality of cooperation and forgiveness in the iterated Prisoner's Dilemma. For their proof, Hobbes need refer only to the Right of Nature and the principles of fear of death and equality, thus giving this behavior as universally optimal.

While this interpretation of Hobbes works, in that it is consistent with his definitions of Will and Deliberation and consequently validates his theory of Natural Law, that it derives Natural Law from Reason rather than from Deliberation is not as clear cut as I would wish. Remember, Reason is defined to be the process of adding and subtracting quantities denoted by language to produce sums to which language alone gives meaning, this being the reason Hobbes gives for Reason's apparent uniqueness to humans and absence among beasts. The above interpretation of Natural Law derives entirely from the approbations and fears of people abstracted to be only sufficiently conscious as to recognize that of which they approve or disapprove, and to perform almost utilitarian calculations thereon (this being the nature of Deliberation) in accordance with the principle of egoism, not of general public utility. As Hobbes gives us language for expressing our appetites and aversions, and in so doing ascribes such language to mankind generally, one may argue that the process of Reason on the language of appetites and aversions (being their summation) is strictly equivalent to the calculation of Deliberation, likewise the summation of appetites and aversions. This is not correct, however, because Hobbes specifically ascribes Reason only to the application of language (thus in exclusion of dumb beasts), while to beasts he ascribes Deliberation and thus Will. The salvation of this reading seems to be in 6.55, in which Hobbes gives forms of speech expressing the various passions of people, and also in the conception of Hope as a factor in Deliberation (unique to humans due to its foundation on Reason, being the Opinion that something is possible). This gives us a round-trip production from appetites and aversions through Language to Deliberation: as we have the Deliberative subjunctive forms of speech (6.55), we may speak of things to come from our appetites and aversions, thus having Hope for our appetites, and thus by Reason having Deliberations and Wills conducive to our goals.

The obvious criticism of this account of natural law is that it presumes humans to be rational actors who fear death overwhelmingly and above all else. Clearly this is a false premise. People voluntarily sacrifice their own lives for those of their friends and family, risk death for strangers in acts of heroism, and so forth. Hobbesian selfish actors would do no such things. One could posit that Hobbes would say those who choose to so sacrifice themselves simply cast up wrong reason, finding their Will weighing Valor too heavily next to their own life. While Hobbes does not directly address this concern, the game theory results show that the presence of some optimistic players does not alter the Nash equilibrium, so long as the players are generally willing to retaliate under Lex Talionis, or the Right of Nature as Hobbes terms it.

A related criticism is that Hobbes' formulation of the Right of Nature forbids suicide under any circumstances. However, one could clearly argue that if the alternative to death were ongoing suffering or an otherwise degraded quality of life, the pain of such would outweigh any Hope of the other pleasures or ends on which the dominant fear of death is founded, thus giving us a quantity which in the reckoning of Deliberation must cast up death as the Will.

The formulation above of Natural Law from Deliberation (being from Language and appetites) seems potentially problematic in that it fails to give any account of why, through like Deliberation, beasts without the use of language may not practice Natural Law out of their own Deliberation, purely from their appetites and fears. To this Hobbes would seem to answer that without the use of Language animals cannot truly have Reason nor communicate it to others like themselves. Though they may by Prudence act in accordance with the Laws of Nature and the Right of Nature, they cannot have found the Laws, for the production of the Laws requires the application of Language and Reason to find the consequences of obeying and disobeying the Laws, and, having weighed them both, to find the Laws universally superior to their alternatives.

Thus, while perhaps Hobbes gives a poor accounting of why we should treat others well in comparison to Kant and other philosophers, his ontology and theory of natural law are at least internally consistent, albeit somewhat difficult to understand due to pervasive overloading of terms in common usage. It remains a complete accounting of how reason and reason alone can lead humans to discover the Natural Laws, and thus liberate themselves from the State of Nature through the establishment of a Commonwealth.

^d

Toothpick: A theory of bytecode machines

In working on my Batbridge simulators and other adventures in computing hardware, one of the problems I have repeatedly encountered is the need to generate bit encodings of symbolic instructions. When working with "well known" architectures with existing tooling support, one can usually find an "off the shelf" assembler either in the GCC collection or elsewhere. However when you're working against a custom instruction set you're on your own for tooling.

So let's invent yet another assembler!

An assembler is simply a program that describes how to construct a binary instruction encoding from a set of symbolic, human-writable mnemonics representing data-manipulating directives to computing hardware. add, sub[tract], mul[tiply] and jump are good examples of such mnemonics. Each operation which a given "machine" (hardware/software interpreter) will accept is typically specified as a sequence of bits and bit-encoded fields (sometimes abbreviated in octal or hexadecimal notation). Batbridge, for instance, specifies that the add instruction is the bit string 110000tttttaaaaabbbbbiiiiiiiiiii, where the prefix 110000 (0x30) indicates the addition operation; the bits named t encode a number t∈[0..30] naming the register to which the result of the addition will be stored; the bits named a, subject to the same constraints as t, name the register from which the "left" operand will be drawn; the bits named b name the "right" operand in the same way; and i is an arbitrary signed 10-bit integer (the sign being the 11th bit).

Sitting down with a specification, you or I could construct a set of functions from appropriately constrained values to bit-encoded instructions as laid out in an instruction set specification. In writing such an assembler by hand, however, you will discover that you are simply encoding, in the programming language of your choice, a translation table laid out in full by the instruction set documentation. Realizing that the assembler we are constructing by hand represents a printer to the bytecode machine's reader, we can think of bytecodes as a language in the same sense that any other set of strings and productions constitutes a language.
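To make that concrete, here's what one such hand-written encoder might look like for the add instruction above (a hypothetical sketch, not Toothpick code):

;; Encode a Batbridge add, assuming t, a, b ∈ [0..30] and i an 11-bit
;; signed immediate, per the layout 110000 ttttt aaaaa bbbbb iiiiiiiiiii.
(defn encode-add [t a b i]
  (-> 2r110000                         ; the 0x30 add prefix
      (bit-shift-left 5)  (bit-or t)   ; destination register
      (bit-shift-left 5)  (bit-or a)   ; "left" source register
      (bit-shift-left 5)  (bit-or b)   ; "right" source register
      (bit-shift-left 11)              ; make room for the immediate
      (bit-or (bit-and i 0x7FF))))     ; mask i to 11 bits and splice it in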

A Theory of Bytecode Machines

It happens that we have good formal theories about languages and their structures. If we stop and squint, we can see that the individual opcodes are analogous to terminals or verbs. If we allow ourselves to model opcodes as words in a language, then a program (a production of words) is a sentence for which we can establish a grammar. For instance, an ELF-formatted program could be considered a sequence of blocks of instructions (verbs) defined in terms of words (opcodes) and formatted with an appropriate header/footer (the ELF file structure). As these are all linguistic constructs, a program capable of generating these values can be a direct production of the specification which lays out these productions, in the same way that a lex/yacc lexer/parser pair is a mechanical production of the lexing and parsing rules which define a computer language.

This leads to an idea: just as we now rarely write parsers and lexers by hand, preferring to derive them automatically from specifications of the languages we wish to process, and since, as suggested above, a bytecode is simply a language on bit strings rather than on ASCII or Unicode characters, we should be able to use the same class of tools to generate assemblers and disassemblers as we use to generate parsers and printers. We just need to construct formal languages describing the instructions our machines accept.

An Assembler Generator

Just as we have theories about "interesting" sets of languages and the parser techniques which may be applied to efficiently recognize and process them, we must temper the goal of automatically assembling arbitrary programs to arbitrary bytecodes. A bytecode is a language on bit strings, and there exist languages which require a Turing machine to recognize, in addition to outright uncomputable languages; consequently one could invent an "interesting" (but, I will argue in a minute, silly) bytecode for which no assembler or recognizer can be automatically generated.

So what features define "interesting" bytecodes? In hardware terms, as I discuss elsewhere, memories and lookup tables are just about the most expensive things you can build in terms of chip area and time. As a result, the overwhelming majority of computer architectures are not what I shall call "micro-programmable": users cannot give meaning to new sequences of bits, and instead interact with the computer by chaining together a few bit-verbs or opcodes which are built into the chip. There do exist micro-programmable computers, FPGAs, but they tend to be special-purpose prototyping machines and are not in wide usage compared to fixed instruction set machines.

Another characteristic of "interesting" bytecodes is in the name: bytes, or words. Many years have gone by since bit-addressed machines were last built with commercial intent. Instead, commercial machines and off-the-shelf memories tend to address "blocks" or "lines" of bits, being strings a power of two in length, which may be divided into substrings of smaller power-of-two lengths as the user sees fit, at no additional hardware complexity cost, either by selecting predefined subdivisions of lines known as "words" or by using bit masking and shifting to select strings smaller than a single word. Strings larger than a single word must be dealt with by using multiple operations on words.

This theory of a fixed word size, fixed instruction set machine characterizes the overwhelming majority of instruction sets and machines designed and built by Intel, ARM, Symbolics, Thinking Machines, and others over the past decades due mainly to the ease of implementation and efficiency of such designs.

So what does this theory about a "useful" bytecode machine buy us? It implies that we can describe a "useful" machine as a word size in bits (endianness must also be considered), together with an enumeration of the various opcodes and how to generate each of them as a bitstring. That completely describes an "interesting" instruction set as characterized above.

Assemblers are simply programs that deal with translating instruction set mnemonics, written by a programmer or more often than not a compiler, into bytecodes which a machine can execute. The details of computing label addresses are for the most part independent of the specific architecture, in so much as they are determined by the properties of the target bytecode described above. This suggests that there must exist a function

(assemble p ← Platform c ← Codes)

Which makes sense if you consider a single target assembler to be the partial application of such an assembler to a platform.

Talk is cheap, show me the code

First things first, we need a way to represent an "interesting" instruction set.

(deftype Architecture
    "A structure representing an \"interesting\" bytecode architecture
    from which a parser or a generator can be computed."
  [word-width         ;; ← Num, how many bits in a word
   pointer-resolution ;; ← Num, how many bits forwards does *(x+1) go
   opcodes            ;; ← Map [Mnemonic → Opcode], opcode formatting rules
   ])

So what do individual opcodes look like? We're gonna need some bit vector formatting engine to do real bytecode generation with.

I will define an opcode to be a bitstring of fixed length (this is important; there are instruction sets with variable-length "opcodes", but as the different length forms have different operational semantics I shall define them to be different operations and declare the instruction set architects silly people) composed of a sequence of concatenated parameters (or fields) and constants, which may be either signed or unsigned. Fields are named or anonymous. The set of named field names in a single opcode must contain no repetitions. We want to be able to write something like the following:

;; IFLT 0x20  100000 _____ aaaaa bbbbb iiiiiiiiiii
;; execute the next instruction IFF (< a b)
(opcode :iflt
  (const-field            :icode 6  0x20)
  (enforced-const-field   :_     5  0)
  (parameter-field        :a     5  register?)
  (parameter-field        :b     5  register?)
  (signed-parameter-field :i     11 literal?))

So in this example instruction from Batbridge, we define an instruction as an ordered bit string where the bits [0..5] are named :icode and fixed with the constant value 0x20. We then have an anonymous field 5 bits long taking the bits [6..10], which we force to have the value 0 since they are defined to be ignored by the instruction set. We then have a pair of unsigned 5-bit register fields, each of which must satisfy a guard predicate ensuring that the value in question fits within the value domain which the instruction set allows for registers. Finally we have a signed field 11 bits long, again guarded with a value domain predicate. This notation encodes the bit format string, as well as the value domains for each element of the bit format string, in a form that can be easily inspected by either an assembler or a disassembler program.

So what does that expression build out to as a data structure?

{:width 32,
 :params (:_ :a :b :i),
 :fields ({:offset 26,
           :width 6,
           :name :icode,
           :type :const,
           :value 35}
          {:offset 21,
           :width 5,
           :name :_,
           :type :enforced-const,
           :value 0}
          {:offset 16,
           :width 5,
           :name :a,
           :type :unsigned-field,
           :pred #<batbridge$register_QMARK_ toothpick.isa.batbridge$register_QMARK_@4bd4095>}
          {:offset 11,
           :width 5,
           :name :b,
           :type :unsigned-field,
           :pred #<batbridge$register_QMARK_ toothpick.isa.batbridge$register_QMARK_@4bd4095>}
          {:offset 0,
           :width 11,
           :name :i,
           :type :signed-field,
           :pred #<batbridge$literal_QMARK_ toothpick.isa.batbridge$literal_QMARK_@38334886>})}

Given this data structure and a sequence (opcode . params), we can trivially compute a parameter map with (zipmap (:params (get isa opcode)) params), and then fold over the fields of the instruction, computing each field and then shifting and masking it into place in the bit vector to construct the bit encoding of the instruction.
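A minimal sketch of that fold (names hypothetical; the real version lives in Toothpick):

(defn assemble-opcode
  "Encode one (opcode & params) form against an ISA map as described above."
  [isa [opcode & params]]
  (let [op  (get isa opcode)
        env (zipmap (:params op) params)]
    (reduce (fn [acc {:keys [offset width name type value]}]
              (let [v    (case type
                           (:const :enforced-const) value
                           (get env name))            ; parameter fields
                    mask (dec (bit-shift-left 1 width))]
                (bit-or acc (bit-shift-left (bit-and v mask) offset))))
            0
            (:fields op))))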

As implemented this is a little fragile, because it assumes that the computer running the assembler assembler has an integer size larger than or equal to the target's; attempting to construct an int (aka bit vector) that's too large will yield... interesting issues. Future fixes are to use a real sequence of bits, or to rewrite this logic in terms of multiplication by two and addition.

Deriving a parser for this structure is also (fairly) straightforward: one can simply generate an alternation matcher over bitstring matchers matching the individual instructions. "Interesting" ISAs tend to put the opcode specification in the "low" bits of each instruction word, so we could generate a jump table by masking on the least significant bits, but that's not implemented yet.

Also this representation falls into jneen's type tagging pitfall, but this is old code and implementation details thereof so I'll forgive myself and fix it later.

We can then define an "architecture" by constructing a composite map as above having a word size and pointer interval specification and a map from keywords naming instructions to many such instruction maps.

So, we can describe an instruction stream as a sequence of instructions, and we can assemble individual instructions by interpreting the representation above. Say we wanted to encode a sample program:

[[:label :top]
 [:op [:add [:param [:register 0]]
            [:param [:const 0]]
            [:param [:const 1]]]]
 [:label 1]
 [:op [:add [:param [:register 1]]
            [:param [:const 0]]
            [:param [:const 2]]]]
 [:label 2]
 [:op [:add [:param [:register 0]]
            [:param [:register 0]]
            [:param [:register 1]]]]
 [:label 3]]

We can simply fold a label-address-computing pass over the program, since all of our opcodes are defined to be fixed length even if they are abstractly variable length; then in a second pass we assemble each instruction against the label location context and the instruction set, giving us a sequence of bytecodes. Done! ∎
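In sketch form (hypothetical names again; resolve-labels stands in for splicing label addresses into an :op form):

(defn assemble-program
  "Two-pass assembly: compute label addresses, then encode each :op."
  [isa program]
  (let [labels (first
                (reduce (fn [[m addr] [tag x]]
                          (case tag
                            :label [(assoc m x addr) addr]
                            :op    [m (inc addr)]))  ; one word per fixed-width op
                        [{} 0]
                        program))]
    (for [[tag op] program :when (= tag :op)]
      ;; resolve-labels is a hypothetical helper, not shown here
      (assemble-opcode isa (resolve-labels labels op)))))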

This is essentially my Toothpick project, which seeks to provide exactly this meta-assembler facility and has already proven its worth to me in automating the generation of assemblers for the various toy architectures with which I've worked at school. It's not done yet and as noted above it needs some spit and shoeshine but I think it represents a remarkably simple and powerful idea.

The real power of this approach is that, extended, it could enable the automatic generation of LLVM backends! If an instruction specification were to include its operational semantics written in terms of LLVM instructions, one could imagine a trivially derived LLVM backend which sequentially scans emitted LLVM assembly, matching out the platform translation of the instruction "at point" and then assembling the resulting instruction stream as above. Since LLVM already provides a massively portable C compiler infrastructure, this means that given only a formal platform specification one could construct a mediocre C compiler.

To take this idea even further, one could imagine generating a platform simulator, accepting either symbolic instructions or bit-encoded instructions, from only the ISA augmented with semantics.

Generating muxing logic in Verilog or VHDL should also be trivial from this structure.

Generate all the things! Correctness by specification & construction! Toy architectures for everyone!

^d

牛: Some motivation

Here is some of my motivation for Oxlang. I'm sorry it isn't shorter; I gave up on copy editing before turning it in. This was originally a personal email and appears almost unaltered.

Clojure is awesome. It is grammatically simple with a classical lispy syntax, and its cheap immutable data structures enable equational reasoning about most problems and solutions.

My top bugbears & goals:

  1. Contributor culture. The contribution process should be as open, friendly and low-friction as possible; see Guy Steele's comments on growing a better language from one with warts through acceptance of contributions. The GHC team has a "3% rule": if a change improves the language or the type checker and comes at a cost of less than a 3% slowdown, it has a high probability of being accepted, and acceptance/rejection is publicly debated à la LKML. Commentary on the Clojure process may be inferred.

  2. Formalism. Formalism gives portability and is the only real route thereto. Parser spec. Evaluator spec. Kernel standard library spec with a features list. This enables portability of code and data structures, and keeps us out of the JVM-centric mess that Clojure itself is close to right now.

  3. Documentation. I built Grimoire (introduction post) because frankly I think that the existing Clojure documentation is lacking, and there is no interest from the Core team in improving it, as evidenced by the consistent rejection of patches which change doc strings beyond simple typo fixes. Goes with the formalism requirements.

  4. Hide the host. That formalism should include the call stack and exceptions. Use the host under the hood, sure, but don't leak the host for the most part. To paraphrase Paul Phillips, just because Java or Clojure jumped off a bridge doesn't mean we can't have nice things like Data.Ord. This likely means making interop deliberately second class. Possibly even explicitly generated from introspection, but not the common case. Musings on this may be found here.

  5. Types (and protocols/interfaces) matter. Clojure and much of the Cognitect crew seem to pretend that they don't. It feels like not a week goes by without someone in IRC or on twitter posting a function that checks the runtime type of some value against some implementation type of Clojure's core. Aphyr obligingly posted today's, clojure.core.incubator/seqable?, and I have a few of my own. Enable static reasoning rather than pretending it doesn't matter and that nobody does it. Edit: From the horse's mouth. Bake in type inference and something akin to prismatic/schema.

  6. Purify the language implementation. Practice what you preach. Oxlang/Kiss's immutable environments are absolutely required. Death to &env, *ns* and mutable namespaces; do reloading by dependency tracking.

  7. Give thought to being self-hosting. It's not essential off the bat, but it makes porting & compatibility easier in the long run. GHC's bootstrap-and-rebuild-itself process beckons, and Toccata plays the same trick.

  8. Give more thought to the module system & versioning culture. I want to be able to depend on "something implementing these functions in the namespace clojure.core", then require those and alias them in as such, not "Dear lein, this exact version of this library, thanks. No ranges, can't trust anyone. Okay let's just load crap up". The language should enforce versioning b/c programmers sure as hell can't be trusted with it. Note that this goes with my first point, to lower the cost(s) of churn in the standard library.

  9. Tooling. I'm with Chas in his comments here. As hinted in 8, the language needs to provide as much support as possible for editors, for REPLs, for packages, for optimizing, for testing. These are the things that make or break the UX of the language, and if the UX isn't good people will just go back to writing node.js or whatever the latest flavor of the week is. Tooling is hard and takes time, but it must be a goal. I also went over this in 'On Future Languages'.

All of these (save the parser and evaluator specs) are goals towards which work can be accumulated over time once the semantics of the language are agreed upon. Tooling, types, speed, and correctness analysis can all be done if change (growth) is cheap and the goals are agreed upon.

I think that given these changes it would be pretty easy to design embedded and memory-restricted targets for "Clojure", and I'll admit that the goal of writing an OS in this language is near to my heart. But for the most part I want to improve the platform-independent experience, since most of the code I write isn't platform bound; it's exploratory scratch work.

^d