On Student Startups

When I enrolled in UT Austin's "student startup seminar", one of the guest speaker comments that stood out to me, and has stuck most firmly in my mind, is that "there are no new ideas, only good execution". This particular lecturer described how he kept a notebook full of random ideas for possible businesses, and talked at length about the importance of validating business models through surveys of potential customers as well as discussions with industry peers. The takeaway he left us with was this: rather than attempting to operate in "stealth" mode, as seems to be fashionable for so many startups developing a product, he argued that ideas are so cheap, and the first mover advantage conferred by simple startup execution costs so great, that attempting to cloak a startup's model and/or product generates no measurable advantage while carrying a concrete cost: the comment from customers and peers that is lost as a consequence of secrecy.

Of the dozen or so startups I've interacted with so far, both in and outside the context of the abovementioned startup seminar, I've seen this overvaluing of secrecy over and over again, especially when requesting feedback on an idea. On Freenode's #clojure channel we have a standing joke: the "ask to ask" protocol. Under the ask to ask protocol, some first-timer will join the channel and ask if anyone knows about some tool X, whereupon some longtime denizen will invoke the ask to ask protocol and tell the newcomer to just ask his real question.

When I see a request from a nontechnical founder for technical feedback over a coffee date, all I can think of is the ask to ask protocol and the litany against startup secrecy. A coffee date is a commitment of potentially several hours of what would otherwise be consulting time, whereas an email or a forum comment is free. By asking that an engineer go on a coffee date to hear a pitch and then comment, the petitioner (normally the nontechnical founder) is guilty both of asking to ask and of limiting themselves to at best one or two responses when several could have been had, had the real question been posed rather than an ask to ask instance.


Of Oxen, Carts and Ordering

Well, the end of Google Summer of Code is in sight and it's long past time for me to make a report on the state of my Oxcart Clojure compiler project. When last I wrote about it, Oxcart was an analysis suite and a work proposal for an emitter.

To recap the previous post: Clojure uses Var objects, an atomic, threadsafe reference type with naming and namespace binding semantics, to create a literal global hashmap from symbols to binding vars, together with a stack of thread-local bindings. These vars form the fundamental indirection mechanism by which Clojure programs map textual symbols to runtime functions. Being atomically mutable, they also serve to enable dynamic re-binding and consequently enable REPL driven development.
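A minimal REPL sketch of that indirection (names here are invented, and results assume a fresh user namespace):

```clojure
;; def interns a Var in the current namespace's symbol -> Var map
(def greeting "hello")

;; the symbol resolves to the Var, not directly to the value
(resolve 'greeting)   ;; => #'user/greeting
(var greeting)        ;; => #'user/greeting, the same object

;; evaluating or calling through the symbol dereferences the Var at use time
(deref #'greeting)    ;; => "hello"
@#'greeting           ;; => "hello"
```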

Were it not for this second aspect, the provision for dynamic redefinition of symbols, Clojure could be statically compiled, eliminating var indirection and achieving a performance improvement.

Moreover, in the style of ClojureScript, exposing the full source of the language to an aggressive static compiler could yield total program size improvements in comparison to programs running on the official Clojure compiler/runtime pair.

So. This was the premise upon which my Project Oxcart GSoC began. Now, standing near the end of GSoC: what has happened, where does the project stand, and what do I consider the results?

As of this post's writing, e7a22a09 is the current state of the Oxcart project. The Oxcart loader and analyzer, built atop Nicola Mometto's tools.analyzer.jvm and tools.emitter.jvm, is capable of loading and generating a whole program AST for arbitrary Clojure programs. The various passes in the oxcart.passes subsystem implement a variety of per-form and whole program traversals, including λ lifting, use analysis, reach analysis and taken-as-value analysis. There is also some work on a multiple arity function reduction system, but that seems to be a problem module at present. The oxcart.emitter subsystem currently features two emitters, only one of which I can claim is of my authoring. oxcart.emitter.clj is a function from an Oxcart whole program AST to a (do) form containing source code for the entire program as loaded. This has primarily been a tool for debugging and ensuring the sanity of various program transformations.

The meat of Oxcart is oxcart.emitter.jvm, a wrapper around a modified version of clojure.tools.emitter.jvm which features changes for emitting statically specialized bytecode that doesn't make use of var indirection. These changes are new, untested and subject to change, but they seem to work. As evidenced by the bench-vars.sh script in the Oxcart project's root directory, for some programs the static target linking transform done by Oxcart can achieve a 24% speedup.

Times in ms:

    Running Clojure 1.6.0 compiled test.vars....
    Oxcart compiling test.vars....
    Running Oxcart compiled test.vars....

How/why is this possible? The benchmark above is highly artificial: it takes 500 defs and, over several thousand iterations, selectively executes half of them at random. It is designed to exaggerate the runtime cost of var indirection by being unprofitable to inline and making heavy use of the (comparatively) expensive var dereference operation.
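The real benchmark lives in bench-vars.sh in the repo; what follows is only an invented approximation of its shape, to show why var dereference dominates the run time:

```clojure
;; generate 500 trivial defs, then repeatedly call a random half of them,
;; so run time is dominated by var lookup/deref rather than by the bodies
(doseq [i (range 500)]
  (eval `(defn ~(symbol (str "f" i)) [] ~i)))

(defn run-iteration []
  (doseq [i (range 500)
          :when (zero? (rand-int 2))]     ;; pick roughly half at random
    ((resolve (symbol (str "f" i))))))    ;; call through the var

(time (dotimes [_ 1000] (run-iteration)))
```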

So what does this mean for Clojure? Is Oxcart proof that we can and should build a faster Clojure implementation or is there a grander result here we should consider?

Clojure's existing compiler operates on a "good enough" principle. It's not especially nice to read nor especially intelligent, but it manages to produce reasonably efficient bytecode. The most important detail of the reference Clojure compiler is that it operates, by design, on a form by form basis. This is an often forgotten detail of Clojure, and one which this project has made me appreciate a lot more.

When is static compilation appropriate and valuable? Compilation is fundamentally an analysis and specialization operation, trading "start" or "compile" time and complexity for runtime performance. This suggests that in different contexts different trade-offs may be appropriate. Furthermore, static compilation tends to inhibit program change. Take the extreme example of directly linked C code: if a single function grows in size so that it cannot be updated in place, then all code which makes use of it (in the worst case, the rest of the program) must be rewritten to reflect the changed location of the changed function. While it is possible to build systems which do intelligent and selective recompilation (GHCi being an example of this), paying for full compilation at interactive development time is simply a waste when the goal of REPL driven development is to provide rapid feedback and enable exploration driven development and problem solving.

While vars are clearly inefficient from the perspective of minimizing function call costs, they are arguably optimal in terms of enabling this development style. Consider the change impact of altering a single definition on a Clojure instance using var indirection rather than static linking. There's no need to compute, recompile and reload the subset of your program impacted by this single change. Rather, the single changed form is recompiled, and the var(s) bound by the recompiled expression are altered. In doing so, no changes need be made to the live state of the rest of the program. The next time client code is invoked, the JVM's fast path (if any) will be invalidated since the call target has changed, but this is handled silently by the runtime rather than being a language implementation detail. Furthermore, var indirection makes compiling an arbitrary Clojure form trivial: after handling all form local bindings (let forms and so forth), try to resolve the remaining symbols to vars in the mapped namespace, and to a class if no var is found. Brain dead even, but sufficient and highly effective in spite of its lack of sophistication.
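To make that change impact concrete, a REPL sketch (names invented):

```clojure
(defn f [] :old)
(defn g [] (f))    ;; g's compiled code derefs #'f on every call

(g)                ;; => :old

(defn f [] :new)   ;; rebinds the root of #'f; g itself is untouched

(g)                ;; => :new, with no recompilation of g required
```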

While, as Oxcart proves, it is possible to build a static compiler for some subset of Clojure, doing so produces not a static Clojure but a different language entirely, because so much of the Clojure ecosystem, and even the standard library, is defined in terms of dynamic redefinition, rebinding and dispatch. Consider for instance the multimethod system, often lauded with the claim that multimethod code is user extensible. Inspecting the macroexpand of a defmethod form, you discover that its implementation, far from being declarative, is based on altering the root binding of the multimethod to install a new dispatch value. It would be possible to statically compile this "feature" by simply collecting all defmethods over each multimethod and AOT computing the final dispatch table, but this is semantics breaking: it is strictly legal in Clojure to define additional multimethod entries dynamically, just as it is legal to have an arbitrary do block containing defs.
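You can see this at the REPL; the example multimethod here is invented, and the expansion shown is approximate:

```clojure
(defmulti area :shape)

;; roughly, modulo metadata, this expands to
;;   (. area addMethod :circle (fn [c] ...))
;; i.e. an imperative mutation of the MultiFn's dispatch table at load
;; time, not a declarative table entry
(macroexpand
 '(defmethod area :circle [c]
    (* Math/PI (:r c) (:r c))))
```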

So, in short: by trying to do serious static compilation you largely sacrifice REPL development, and to do static compilation of Clojure you wind up defining another language, one which is declarative about defs, namespaces and dependencies rather than imperative as Clojure is. I happen to think that such a near-Clojure static language, call it ClojureScript on the JVM, would be very interesting and valuable, but Clojure as currently implemented on the JVM is no such thing. This leads me to the relatively inescapable conclusion that building a static compiler for Clojure is putting the cart before the ox, since the language simply was not designed to benefit from it.

Moving Forwards

Now. Does this mean we should write off tools.emitter.jvm, Clojure in Clojure and Oxcart? By no means. tools.emitter.jvm uses the same var binding structure and function interface that Clojure does. Moreover it's a much nicer and more flexible single form level compiler that I happen to think represents a viable and more transparent replacement for clojure.lang.Compiler.

So what does that leave on the table? Open question. Clojure's compiler has only taken major changes from two men: Rich and Stu. While there is merit in considered design and stability, this also means that effort spent cleaning up the core of Clojure itself, rather than directed at a hard fork or redesign of Clojure, is probably wasted, especially in the compiler.

Clearly, while var elimination was a successful optimization, the value Clojure derives from the Var system means it's not a generally applicable one. However, it looks like Rich has dug the old invokeStatic code back out for Clojure 1.7, and the grapevine is making 10% noises, which is on par with what Oxcart seems to get for more reasonable inputs, so we'll see where that goes.

While immutable data structures are an awesome abstraction, Intel, AMD and ARM have gotten very good at building machines that exploit program locality for performance, a property fundamentally incompatible with literal immutability. Transients can help mitigate Clojure's memory woes, and compiler introduction of transients to improve memory performance could be interesting. Unfortunately, this too is a whole program optimization, which clashes with the previously noted power Clojure derives from single form compilation.

Using core.typed to push type signatures down to the bytecode level would be interesting, except that almost everything in Clojure is a boxed object and object checkcasts are cheap, so this would probably yield little performance improvement unless the core datatypes were reworked to be type parametric. It is also another whole program transform requiring an Oxcart style whole program analysis system.

The most likely avenue of work is that I'll start playing with a partial fork of Clojure which disassociates RT from the compiler, data_readers.clj, user.clj and clojure/core.clj. While this will render Oxcart even more incompatible with core Clojure, it will also free Oxcart to emit clojure.core as if it were any other normal Clojure code including tree shaking and escape the load time overhead of bootstrapping Clojure core entirely.

This coming Monday I'll be giving a talk on Oxcart at the Austin TX Clojure meetup, so there's definitely still more to come here regardless of the Clojure/Conj 14 accepted talk results.


Of Mages and Grimoires

When I first got started with Clojure, I didn't know (and it was a while before I was told) about the clojure.repl toolkit, which offers Clojure documentation access from within an nREPL instance. Coming from the Python community, I assumed that Clojure, like Python, would have excellent API documentation with examples that a total n00b like myself could leverage to bootstrap my way into simple competence.

While Clojure does indeed have web documentation hosted on GitHub's Pages service, it is eclipsed in Google PageRank, if not in quality as well, by a community built site owned by Zachary Kim: ClojureDocs. This wouldn't be a real problem at all, were it not for the fact that ClojureDocs was last updated when Clojure 1.3 was released. In 2011.

While Clojure's core documentation and core toolkits haven't changed much since the 1.3 removal of clojure.contrib.*, I have recently felt frustrated that newer features of Clojure such as as-> and cond->>, which I find very useful in my day to day Clojure hacking, not only were impossible to search for on Google (being made of characters Google tends to ignore) but also didn't have ClojureDocs pages, being additions since 1.3. Long story short: I finally got hacked off enough to yak shave my own alternative.




Grimoire seeks to do what ClojureDocs did: provide community examples of Clojure's core functions along with their source code and official documentation. However, with Grimoire I hope to go farther than ClojureDocs did in a number of ways.

I would like to explore how I find and choose the functions I need, and try to optimize accessing Grimoire accordingly so that I can find the right spanner as quickly as possible. Part of this effort is the recent introduction of a modified Clojure Cheat Sheet (thanks to Andy Fingerhut & contributors) as Grimoire's primary index.

Something else I would like to explore with Grimoire is automated analysis and link generation between examples. In ClojureDocs (and, I admit, Grimoire as it stands) examples are static text, analyzed only for syntax before display to users. As part of my work on Oxcart, I had an idea for how to build a line information table from symbols to binding locations, and I'd like to explore providing examples with inline links to the documentation of the other functions they use.

Finally, I'd like to go beyond Clojure's documentation. ClojureDocs and Grimoire at present only present the official documentation of Clojure functions. However, some of Clojure's behavior, such as the type hierarchy, is not always obvious.

< amalloy> ~colls

< clojurebot> colls is http://www.brainonfire.net/files/seqs-and-colls/main.html

Frankly, the choice of the word Grimoire is a nod in this direction: a joke, as it were, on Invoker's fluff and the Archmagus-like mastery of the language's many ins and outs which I see routinely exhibited on Freenode's #clojure channel, while I struggle with basic stuff like "Why don't we have clojure.core/atom?", "Why is a Vector not seq? when it is Seqable and routinely used as such?", and "Why don't we have clojure.core/seqable?".

Clojure's core documentation doesn't feature type signatures, even types against Clojure's data interfaces. Since I think to a large degree in terms of the types and structure of what I'm manipulating, I find this burdensome. Many docstrings are simply wanting, if present at all. I think these are usability defects, and I would like to explore augmenting the "official" Clojure documentation. Andy Fingerhut's thalia is another effort in this direction, and one which I hope to explore integrating into Grimoire as non-invasively as possible.


Much of what I have talked about here is work that needs to be done. The first version of Grimoire that I announced on the Clojure mailing list was a trivial hierarchical directory structure aimed at users who sorta kinda knew what they were looking for and where to find it, because after two years of non-stop Clojure that's all I personally need out of a documentation tool for the most part. Since then I've been delighted to welcome and incorporate criticism and changes from those who have found Grimoire similarly of day to day use. However, I think it's important to note that Grimoire is fundamentally user land tooling, as is Leiningen and as is thalia.

As such, I don't expect that Grimoire will ever have any official truck with Rich's sanctioned documentation. This, and the hope that we may one day get better official docs, means that I don't really foresee migrating Grimoire off of my personal hosting to a more "clojure-ey" domain. Doing so would lend Grimoire an undue level of "official-ness", never mind the fact that I'm now rather attached to the name and that my browser goes to the right page with four keystrokes.


As far as I am concerned, Grimoire is in a good place. It's under my control, I have, shall we say, a side project as I'm chugging along on the GSoC "day job", and judging by Google Analytics some 300 other Clojurists have at least bothered to stop by, and some 70 have found Grimoire useful enough to come back for more, all in the three days that I've had analytics on. There is still basic UI work to be done, which isn't surprising because I claim no skill or taste as a graphic designer, and there's a lot of stuff I'd like to do in terms of improving upon the existing documentation.

Frankly I think it's pretty crazy that I put about 20 hours of effort into something, threw it up on the internet and suddenly for the first time in my life I have honest to god users. From 38 countries. I should try this "front end" stuff more often.

So here's hoping that Grimoire continues to grow, that other people find it useful, and that I manage to accomplish at least some of the above. While working on Oxcart. And summer classes. Yeah.....


Oxcart and Clojure

Well, it's a month into Google Summer of Code, and I still haven't actually written anything about my project, better known as Oxcart, beyond what little per-function documentation I have written and the interview I did with Eric N. So it's time to fix that.


Clojure isn't "fast", it's simply "fast enough". Rich, while a really smart guy with awesome ideas, isn't a compilers research team, and he didn't design Clojure with a laundry list of tricks he wanted the compiler to be able to play in mind. Instead, in the beginning, Rich designed a language that he wanted to use to do work, built a naive compiler for it, confirmed that JVM JITs could run the resulting code sufficiently fast, and got on with actually building things.

The consequence of Rich's priorities is that Clojure code is in fact fast enough to do just about whatever you want, but it could be faster. Clojure is often criticized for its slow startup time and its large memory footprint. Most of this footprint is not so much a consequence of fundamental limitations of Clojure as a language (some of it is, but that's for another time) as it is a consequence of how the existing Clojure compiler and runtime operate together.

So: Clojure wasn't designed to have a sophisticated compiler, it doesn't have such a compiler, and for some applications Clojure is slow compared to equivalent languages as a result. For GSoC I proposed to build a prototype compiler which would attempt to build Clojure binaries tuned for performance, and I got accepted.

Validating complaints

Okay, so I've made grand claims about the performance of Clojure, that it could be faster and so forth. What exactly do I find so distasteful in the language implementation?

Vars are the first and primary whipping boy. Vars, defined over here, are the data structures Clojure uses to represent bindings between symbols and values. These bindings, even when static at compile time, are interned at runtime in a thread-shared global bindings table, while thread-local bindings tables contain "local" bindings which take precedence over global ones. This is why clojure.core/alter-var-root and the other var manipulation functions in the Clojure standard library exist: the root bindings of a thread-shared, thread-safe Var may only be modified through a synchronized operation.
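Both levels of the bindings table are observable from the REPL (a sketch; *debug* is an invented example var):

```clojure
(def ^:dynamic *debug* false)

;; a thread-local binding shadows the root, for this thread only
(binding [*debug* true]
  *debug*)            ;; => true
*debug*               ;; => false, the root is untouched

;; alter-var-root swaps the thread-shared root binding itself
(alter-var-root #'*debug* (constantly true))
*debug*               ;; => true, and visible to every thread
```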

Now, Vars are awesome because they are thread shared. This means that if you drop a REPL into a live program, you can start re-defining Vars willy nilly and your program will "magically" update. Why does this work? Because compiled Clojure code never holds a direct reference to a "function"; instead it holds a thread-synchronized var which names a function, and fetches the latest function named by that var every time it has to make the call.

This is great because it enables the REPL driven development and debugging pattern upon which many people rely. However, for the overwhelming majority of applications, the final production form of a program will never redefine a Var. Consequently, the nontrivial overhead of performing thread synchronization, fetching the function named by a var, checking that the fn is in fact a clojure.lang.IFn and then calling the function is wasted overhead that the reference Clojure implementation incurs on every single function call. The worst part is that Var has a volatile root, which poisons the JVM's HotSpot JIT by providing a barrier at function boundaries across which the JIT can't inline or analyze.

IFn is another messy part of Clojure programs. The JVM has no semantics for representing a pointer to a "function". The result is that Clojure doesn't really have "functions" when compiled to JVM bytecode; instead Clojure has classes providing .invoke() methods, and instances of those classes (or, more often, Vars naming IFns) are passed around as values. This isn't a bad thing for the most part, except that IFn bodies themselves call through Vars, and vars are slow.

Why do we need an IFn with multiple invoke methods at all? Because Clojure functions can have multiple arities, and a single IFn instance can be invoked several ways, where a JVM Method cannot be.

The real issue with IFns is that they exist at all. Every single class referenced by a JVM program must be read from disk, verified, loaded and compiled, so there is a load time cost to each individual class in a program. Clojure exacerbates this cost by generating a lot of small classes to implement IFns. When a namespace is required or otherwise loaded by a compiled Clojure program, the Clojure runtime loads foo__init.class, which creates instances of every IFn used to back a Var in the namespace foo and installs those definitions in the global var name table. Note that this loading is single threaded, so all synchronization at load time is wasteful. Also note that top level forms like (println "I got loaded!") are evaluated when the namespace containing them is loaded.
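The load time behavior is easy to observe; given a namespace like the following (an invented example), requiring it prints immediately:

```clojure
(ns foo)

(println "I got loaded!")  ;; a top level form: runs during (require 'foo)

(defn bar [] 1)            ;; also a top level form: loading it constructs
                           ;; an instance of the compiled fn class and
                           ;; installs it as the root binding of #'foo/bar
```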

The sales pitch

So what's the short version here? Clojure has Vars in order to enable a dynamic rebinding model of programming which deployed applications do not typically need. Because applications do not tend to use dynamic binding for application code, we can discard Vars and directly use the IFn classes to which the Vars refer. This could be a significant win just because it removes that volatile root on Vars that poisons the JIT.

This opens up more opportunities for playing tricks in the compiler, because we don't really need IFn objects for the most part. Remember that IFn objects are only needed because methods aren't first class on the JVM, and to support dynamic redefinition we need a first class value for Vars to point to. If all definitions are static, then we don't need Vars, so we can find the fully qualified class and method to which a given function invocation points, freeing a Clojure compiler to do static method invocation. This should be a performance win, as it lets the JVM JIT skip type checking the object on which the method is invoked and inline the targeted method, among other tricks.
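The call shape this wants already exists in Clojure for Java interop; a sketch of the contrast (helper names invented):

```clojure
(defn helper [x] x)

;; a var-indirected call site: fetch #'helper's current value, checkcast
;; to IFn, then virtual-dispatch IFn.invoke
(defn via-var [x] (helper x))

;; an interop call site: compiles to a single invokestatic of Math.abs,
;; with no volatile read in the JIT's way; this is the shape a static
;; compiler could emit for every known, never-redefined function
(defn via-static [^long x] (Math/abs x))
```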

If we can throw away IFns by implementing functions as, say, static methods on a namespace "class" rather than having separate classes for each function, then we cut down on program size in terms of classes, which should somewhat reduce the memory footprint of Clojure programs on disk and in memory, in addition to reducing load time.


So what is Oxcart? Oxcart is a compiler for a subset of Clojure which seeks to implement exactly the performance hat tricks specified above and a few more. For the most part these are simple analysis operations with well defined limitations. In fact, most of what's required to use Oxcart to compile Clojure code is already built and working in that Oxcart can currently rewrite Clojure programs to aid and execute the above transformations.

Oxcart is also a huge exercise in the infrastructure around the clojure.tools.analyzer contrib library, as Oxcart is the first full up compiler to use tools.analyzer, tools.analyzer.jvm and tools.emitter.jvm as more than the direct compilation pipeline which tools.emitter.jvm implements. This means that Oxcart has interesting representational issues in how passes and analysis data are handled and shared between passes, let alone how the data structure describing the entire program is built and the performance limitations of various possible representations thereof.

So what's Oxcart good for? Right now: nothing. Oxcart doesn't have an AOT file emitter yet, and relies on tools.emitter.jvm for eval, so it is no faster at evaluating code in a live JVM than Clojure is. At present I'm working on an AOT emitter, which will let me start doing code generation and profiling Oxcart against Clojure. I hope to post an initial emitter and a trivial benchmark comparing a pair of mutually recursive math functions between Clojure and Oxcart.

Before you go

I know I've said this entire time that Oxcart is a Clojure compiler. That's a misnomer. Oxcart doesn't compile Clojure and never will. Clojure has stuff like eval, eval-string, resolve, load-string, load and all the bindings machinery that allow Clojure programmers to reach around the compiler's back and change bindings and definitions at runtime. These constructs are not and never will be supported by Oxcart, because supporting them would require disabling optimizations. Oxcart also doesn't support non-def forms at the top level: Oxcart programs are considered to be a set of defs and an entry point. Oxcart also assumes that definitions are single; redefining a var is entirely unsupported, albeit not yet a source of warnings.
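For concreteness, each of these capabilities is a one-liner that no static compilation scheme can see through (illustrative snippets):

```clojure
(eval (read-string "(+ 1 2)"))         ;; compile and run new code at runtime
(resolve (symbol (str "ma" "p")))      ;; look up a var from a computed name
(load-string "(def x 1)")              ;; install a brand new var at runtime
(binding [*out* *err*] (println "!"))  ;; thread-locally rebind an existing var
```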

Some of these differences are sufficiently extreme that I'm honestly on the fence about whether Oxcart is really Clojure or some yet undefined "Oxlang" more in the style of Shen, but for now I'll stick to building a prototype :D


Dogecoin block pricing experiment

As I've explained repeatedly before, the block reward is the mechanism which we as a community use to buy the hashing power that keeps our network relatively independent and secure as an honest processor of users' transactions. As we cannot pay miners in fiat, we pay them in Doge in the form of block rewards and transaction fees. This means that we as a community are putting a price on the hashing power used to secure our network. I thought it would be interesting to try to figure out what that number is.

The reason it's interesting is the Dogecoin block reward schedule. To encourage mining and manage the value of the coin, Doge operates on a diminishing block reward schedule that encourages mining, and buying into Doge with hashing power, by rewarding early miners more than late miners. However, this works only to a point. Mining Doge has to be ROI positive, or nobody would bother with it. Mining Doge also has to be ROI competitive with other altcoins, or nobody would bother with it. The first requirement I address in this post; the second I'll address some other time.


This is current (ish) pricing and power consumption information for a variety of different mining hardware. The important number here is the final column, KH/w. As the point of this exercise is to put a Doge price on every block, the power consumption of every KH pointed at mining Doge counts.

 Vendor      Miner              KH/s       Watt   Cost           KH/w  
 KnCMiner    Mini Titan         150,000     400   $5,495.00    375.00  
 KnCMiner    Titan              300,000     800   $9,995.00    375.00  
 Gridseed    ASIC Blade Miner   5,200        70   $1,049.95     74.29  
 Gridseed    ASIC 5-Chip        350           7   $79.95        50.00  
 DualMiner   ASIC USB 2         70            2   $81.99        46.67  
 GAWMiner    War Machine        54,000    1,280   $5,899.95     42.19  
 GAWMiner    Black Widow        13,000      320   $1,799.95     40.63  
 GAWMiner    Fury               1,000        30   $159.95       33.33  
 DualMiner   ASIC USB 1         70            3   $98.00        28.00  
 Radeon      R9 290X            850         295   $579.99        2.88  
 NVIDIA      GTX 770            220         230   $329.99        0.96  
 NVIDIA      GTX 560 Ti         150         170   $120.00        0.88  
 Average                                                      89.1525  

At present Doge trades near 1K Doge = 0.435 USD, or, measured in Doge per USD, 2298.85.

Measuring the price per block

Power costs 0.1220 USD/kWh at US national average prices, so let's run some numbers.

(ns doge
  (:require [meajure]
            [clojure.algo.generic.arithmetic
             :refer [+ - / *]]))

;; constants and definitions
;;------------------------------------------------------------------------------

(def kw-per-w
  #meajure/unit [1/1000 :kwatt 1 :watt -1]) ;; definition

(def hr-per-block
  #meajure/unit [1/60 :hour 1 :block -1]) ;; definition

;; simulation variables
;;------------------------------------------------------------------------------

(def kh-per-watt
  #meajure/unit [375
                 :khash :watt -1 :second -1]) ;; variable

(def usd-per-watt-block
  (* #meajure/unit [0.122 :usd :kwatt -1 :hour -1] ;; variable
     kw-per-w
     hr-per-block))

(def doge-per-usd
  #meajure/unit [2298.85 :doge 1 :usd -1]) ;; variable

;; computation
;;------------------------------------------------------------------------------

(def price-per-watt-block
  (/ doge-per-usd kh-per-watt))

(print "DOGE per watt-block |"
       price-per-watt-block)

(def price-per-kh
  (* usd-per-watt-block
     price-per-watt-block))

(print "DOGE per KH         |"
       price-per-kh)

;; break even block reward
(print "DOGE for hashrate   |"
  (* price-per-kh
     #meajure/unit [50 :mega :khash :second -1]))
DOGE per watt-block | #meajure/unit [25.785592103418296,
                                     :second, :watt, :khash -1, :usd -1, :doge]

DOGE per KH         | #meajure/unit [5.243070394361721E-5,
                                     :doge, :khash -1, :second, :block -1]

DOGE for hashrate   | #meajure/unit [2621.5351971808605,
                                     :doge, :block -1]


 Miner              KH/s       Watt      KH/w   Min. block reward  
 Mini Titan         150,000     400    375.00       623.243777777  
 Titan              300,000     800    375.00       623.243777777  
 ASIC Blade Miner   5,200        70     74.29      3158.329954954  
 ASIC 5-Chip        350           7     50.00      4674.328333333  
 ASIC USB 2         70            2     46.67      5007.851224910  
 War Machine        54,000    1,280     42.19      5539.616417790  
 Black Widow        13,000      320     40.63      5752.311510378  
 Fury               1,000        30     33.33      7012.193719371  
 ASIC USB 1         70            3     28.00      8347.014880952  
 R9 290X            850         295      2.88     81151.533564810  
 GTX 770            220         230      0.96    243454.600694440  
 GTX 560 Ti         150         170      0.88    265586.837121212  
 Average                              89.1525           28935.106  
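The "Min. block reward" column falls out of the same formula: the block reward at which a miner's DOGE earnings exactly cover its power bill. A sketch in Python, with constants mirroring the script above (`min_block_reward` is a helper name of my own, not anything from meajure):

```python
# Reproduce the "Min. block reward" column from a miner's KH/W.
# Constants mirror the meajure script: 0.122 usd/kWh power, one-minute
# blocks, 2298.85 doge/usd, and a 50 mega-khash/s hashrate.
POWER_COST = 0.122 / 1000 / 60   # usd per watt-block
DOGE_PER_USD = 2298.85           # doge per usd
HASHRATE = 50e6                  # khash per second

def min_block_reward(kh_per_watt):
    """Block reward (doge) needed to cover power at a given efficiency."""
    return POWER_COST * (DOGE_PER_USD / kh_per_watt) * HASHRATE

mini_titan = min_block_reward(375.00)  # ~623.244
r9_290x = min_block_reward(2.88)       # ~81151.53
```

Feeding in each row's KH/W reproduces the last column of the table to rounding.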

In order to receive any return on invested compute power, a miner must find blocks. If the difficulty is too high, then it isn't feasible to find blocks and mining is ROI negative. This is why Wafflepool and other multipools switch coins: they mine a coin for as long as doing so is ROI positive, and once the difficulty adjusts so that it no longer is, they bail out and head for greener pastures.

So, Dogecoin has a minimum block reward of 10,000 Doge, but we won't see that until January. At present we're at a 125,000 Doge block reward. Looking at this table, NVIDIA GPUs are now absolutely ROI negative. I won't be mining any more for this reason.

You don't just get coins for mining, though; you get coins for finding blocks. Assuming that the Scrypt proof of work function is inviolate, what fraction of the hashrate does each of these miners need to represent at the current block reward to break even?

 Miner                KH/s   Min. block reward   Hashrt. Frac.   Hashrt. (KH/s)  
 Mini Titan         150000       623.243777777    4.9859502e-3    30084536.0000  
 Titan              300000       623.243777777    4.9859502e-3    60169073.0000  
 ASIC Blade Miner     5200      3158.329954954     0.025266640      205804.9700  
 ASIC 5-Chip           350      4674.328333333     0.037394627        9359.6334  
 ASIC USB 2             70      5007.851224910     0.040062810        1747.2564  
 War Machine         54000      5539.616417790     0.044316931     1218495.9000  
 Black Widow         13000      5752.311510378     0.046018492      282495.1300  
 Fury                 1000      7012.193719371     0.056097550       17826.0900  
 ASIC USB 1             70      8347.014880952     0.066776119        1048.2790  
 R9 290X               850     81151.533564810      0.64921227        1309.2790  
 GTX 770               220    243454.600694440       1.9476368         112.9574  
 GTX 560 Ti            150    265586.837121212       2.1246947          70.5983  
 Average                             28935.106                                   
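The fractions above follow from expected value: at block reward R, a miner holding share f of the network hashrate collects f × R per block on average, so break even requires f = (min. block reward) / R. A small Python sketch of one row (the R9 290X), under the same assumptions as before:

```python
# Required hashrate share and implied break-even network size for one
# miner at the current 125,000 doge block reward. Figures are the
# R9 290X row; min_reward comes from the previous table.
BLOCK_REWARD = 125_000           # doge per block today

miner_khs = 850                  # R9 290X hashrate, KH/s
min_reward = 81151.533564810     # its break-even block reward, doge

frac = min_reward / BLOCK_REWARD  # share of network hashrate needed
network_khs = miner_khs / frac    # network KH/s at which it breaks even

print(frac)         # ~0.6492
print(network_khs)  # ~1309.28
```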

And what's our current hashrate? 48GH/s, or 48000000 KH/s.


I think the writing's on the wall here. Even if everyone on the Dogecoin network were running the War Machine miner, our hashrate would be roughly 40 times higher than break even (48 GH/s against about 1.2 GH/s). What does this mean for our future? As block rewards fall, either the price of Doge will rise through adoption as a unit of trade, driving down the minimum block reward numbers, decreasing the hashrate fractions and increasing the global break-even network hashrate, or the network hashrate will have to fall.

What do I think this means? Well, after fooling around with this math and the price-of-Doge variable, the computed maximum network hashrate goes over 45GH only when the price of Doge increases to about 1523 Doge/USD, or at current market prices about 130 Satoshi BTC. If the price of Doge doesn't rise back to the 130 mark, I expect that we will see our hashrate slowly but steadily fall until it reaches a point where miners perceive that they are breaking even, very likely under 1GH/s according to these numbers.
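That 1523 doge/usd figure can be reproduced by treating the price as the free variable. A hedged Python sketch, assuming a best-case Titan-class miner (375 KH/W, 150,000 KH/s) and the same model constants as before:

```python
# Break-even network hashrate as a function of the doge/usd price, for
# a Titan-class miner (375 KH/W, 150,000 KH/s). Same model as the
# tables above: 0.122 usd/kWh, one-minute blocks, 125,000 doge reward.
def breakeven_network_khs(doge_per_usd, kh_per_watt=375.0, miner_khs=150_000):
    min_reward = (0.122 / 60000) * (doge_per_usd / kh_per_watt) * 50e6
    return miner_khs * 125_000 / min_reward   # KH/s

at_1523 = breakeven_network_khs(1523)      # ~45.4e6 KH/s, i.e. ~45 GH/s
at_today = breakeven_network_khs(2298.85)  # ~30.1e6 KH/s at today's price
```

As the price of Doge rises (fewer doge per USD), the break-even network hashrate grows; at about 1523 doge/usd it crosses the ~45 GH/s mark.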

If we want Dogecoin to grow into something more than a tip currency, I think we missed the boat when we turned down merged mining with LTC. We have to secure our hashrate for the long haul it'll take to build real market valuation through acceptance as a unit of trade, and merged mining with LTC would have achieved that goal. So what do I think the outlook is? I think we're gonna run out of fuel in earth orbit rather than get to the moon.