Tuesday, January 6, 2015

2 recent Emscripten stories

In case you missed them:

First, DOSBox is an open source emulator that lets you run old DOS programs (which means, mostly, games ;) ). It simulates a PC-compatible machine, complete with an Intel CPU and typical graphics and audio cards of the time. DOSBox was ported to Emscripten by dreamlayers a while ago - an impressive achievement that lets you run practically any old DOS game right in your browser, no plugins required. Recently, two separate projects have picked it up and are running with it:
  • The Internet Archive has an MS-DOS showcase (for more details see Jason Scott's blog). At the time of this post, it has over 2,400 programs up. This is amazing for helping to preserve an entire era in computing history.
  • The ever-porting caiiiycuk created js-dos.com. It has a much smaller selection of programs, but the site itself is a tribute to the days of MS-DOS - really very nice work that brings back some of the feeling of those times.
Second, Ogre3D, one of the top open source 3D rendering engines, supports exporting to the web using Emscripten as of version 1.10.0. This is exciting because while there are many 3D open source graphics engines, Ogre3D is one of the most general-purpose and most full-featured. Also, I remember that around 3 years ago Ehsan tried to port Ogre3D, but at the time Emscripten was far too immature. It's great to see it ported now, and officially!

Tuesday, July 15, 2014

Massive, a new work-in-progress asm.js benchmark - feedback is welcome!

Massive is a new benchmark for asm.js. While many JavaScript benchmarks already exist, asm.js - a strict subset of JavaScript, designed to be easy to optimize - poses some new challenges. In particular, asm.js is typically generated by compiling from another language, like C++, and people are using that approach to run large asm.js codebases, by porting existing large C++ codebases (for example, game engines like Unity and Unreal).

Very large codebases can be challenging to optimize for several reasons: Often they contain very large functions, for example, which stress register allocation and other compiler optimizations. Total code size can also cause pauses while the browser parses and prepares to execute a very large script. Existing JavaScript benchmarks typically focus on small programs, and tend to focus on throughput, ignoring things like how responsive the browser is (which matters a lot for the user experience). Massive does focus on those things, by running several large real-world codebases compiled to asm.js, and testing them on throughput, responsiveness, preparation time and variance. For more details, see the FAQ at the bottom of the benchmark page.
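The responsiveness idea can be sketched in a few lines (hypothetical code, not Massive's actual implementation): run a workload in chunks and record how long each chunk blocks, since long pauses are what make a page feel frozen.

```javascript
// Sketch of a responsiveness measurement (hypothetical, not Massive's code):
// time how long each chunk of work blocks the thread.
function busyWork(ms) {               // stand-in for a slice of asm.js work
  var end = Date.now() + ms;
  while (Date.now() < end) {}         // spin to simulate computation
}

function measurePauses(chunkMs, chunks) {
  var pauses = [];
  for (var i = 0; i < chunks; i++) {
    var start = Date.now();
    busyWork(chunkMs);                // in a real page, control would return
    pauses.push(Date.now() - start);  // to the browser between chunks
  }
  return pauses;
}

var pauses = measurePauses(10, 3);
console.log("worst pause: " + Math.max.apply(null, pauses) + "ms");
```

A real benchmark would also track variance across the pauses, not just the worst case.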

Massive is not finished yet - it is a work in progress - so the results should not be taken seriously yet (bugs might cause some things not to be measured accurately, etc.). Massive is being developed as an open source project, so please test it and report your feedback. Any issues you find or suggestions for improvements are very welcome!

Tuesday, June 3, 2014

Looking through Emscripten output

Emscripten compiles C and C++ into JavaScript. You are probably about as likely to want to read its output as you would want to read output from your regular C or C++ compiler - that is, you probably don't! Most likely, you just want it to work when you run it. But in case you are curious, here's a blogpost about how to do that.

Imagine we have a file code.c with contents
#include <stdio.h>

int double_it(int x) {
  return x + x;
}

int main() {
  printf("hello, world!\n");
  return 0;
}
Compiling it with emcc code.c, we can run it using node a.out.js, and we get the expected output of hello, world!. So far so good - now let's look at the code.

The first thing you might notice is the size of the file: it's pretty big! Looking inside, the reasons become obvious:
  • It contains comments. Those would be stripped out in an optimized build.
  • It contains runtime support code, for example it manages function pointers between C and JS, can convert between JS and C strings, provides utilities like ccall to call from JS to C, etc. An optimized build can reduce those, especially if the closure compiler is used (--closure 1): when enabled, it will remove code not actually called, so if you didn't call some runtime support function, it'll be stripped.
  • It contains large parts of libc! Unlike a "normal" environment, our compiler's output can't just expect to be linked against libc as it loads. We have to provide everything we need that is not an existing web API. That means we need to provide a basic filesystem, printf/scanf/etc. ourselves. That accounts for most of the size, in fact. Closure compiler helps with the part of this that is written in normal JS; the part that is compiled from C gets stripped by LLVM and is minified by our asm.js minifier in optimized builds.
For comparison, an optimized build with closure compiler, using -O2 --closure 1, is 1/4 the original size. This still isn't tiny, mainly due to the libc support we have to provide. In a medium to large application, this is negligible (especially when gzipped, which is how it would be sent over the web), but for tiny "hello world" type things it is noticeable.

(Side note: We could probably optimize this quite a bit more. It's been lower priority I guess because the big users of Emscripten have been things like game engines, where both the code and especially the art assets are far larger anyhow.)

Ok, getting back to the naive unoptimized build - let's look for our code, the functions double_it() and main(). Searching for main leads us to
function _main() {
  var $vararg_buffer = 0, label = 0, sp = 0;
  sp = STACKTOP;
  $vararg_buffer = sp;
  // ... many more lines, including the call into printf ...
  STACKTOP = sp;
  return 0;
}
This seems like quite a lot for just printing hello world! That's because this is unoptimized code. So let's look at an optimized build. We need to be careful, though - the optimizer will minify the code to compress it, and that makes it unreadable. So let's build with -O2 --profiling, which optimizes in all the ways that do not interfere with inspecting the code (to profile JS, it is very helpful to be able to read it, hence that option keeps the output readable but otherwise optimized; see emcc --help for the -g1, -g2, etc. options, which do related things at different levels). Looking at that code, we see
function _main() {
  var i1 = 0;
  i1 = STACKTOP;
  _puts(8) | 0;
  STACKTOP = i1;
  return 0;
}
There is some stack handling overhead, but now it's clear that all it's doing is calling puts(). (The 8, by the way, is a pointer: the address of the "hello, world!" string in the typed array that represents memory.) Wait, why is it calling puts() and not printf() like we asked? The LLVM optimizer does that, as puts() is faster than printf() on the input we provide (there are no variadic arguments to printf here, so puts is sufficient).

Keeping Code Alive

What about the second function, double_it()? There seems to be no sign of it. The reason is that LLVM's dead code elimination got rid of it - it isn't being used by main(), which LLVM assumes is the only entry point to the entire program! Getting rid of unused code is very useful in general, but here we actually want to look at code that is dead. We can disable dead code elimination by building with -s LINKABLE=1 (a "linkable" program is one we might link with something else, so we assume we can't remove functions even if they aren't currently being used). We can then find
function _double_it(i1) {
  i1 = i1 | 0;
  return i1 << 1 | 0;
}
(Note btw the "_" that prefixes all compiled functions. This is a convention in Emscripten output.) Ok, this is our double_it() function from before, in asm.js notation: we coerce the input to an integer (using |0), then multiply it by two (via the shift i1 << 1) and return it, coerced to an integer once more.
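For context, here is a minimal hand-written module in the same style (a sketch, not actual Emscripten output - makeModule and its arguments are made up), showing the coercion pattern in a form you can run directly:

```javascript
// A hand-written asm.js-style module (illustration only, not compiler output).
// asm.js modules take (stdlib, foreign, heap) and return their exports.
function makeModule(stdlib, foreign, heap) {
  "use asm";
  function _double_it(i1) {
    i1 = i1 | 0;          // coerce the argument to a 32-bit integer
    return i1 << 1 | 0;   // multiply by two via a shift, coerce the result
  }
  return { _double_it: _double_it };
}

var mod = makeModule(typeof window !== "undefined" ? window : global,
                     {}, new ArrayBuffer(0x10000));
console.log(mod._double_it(21)); // 42
```

Even if an engine rejects this as strict asm.js, it still runs as plain JavaScript, which is part of the appeal of the subset.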

We can keep code alive by calling it, as well. But if we called it from main, it might get inlined. So disabling dead code elimination is simplest. You can also do this in the C/C++ code, using the C macro EMSCRIPTEN_KEEPALIVE on the function (so, something like   int EMSCRIPTEN_KEEPALIVE double_it(int x) {  ).

C++ Name Mangling

Note btw that if our file had the suffix cpp instead of c, things would have been less fun. In C++ files, names are mangled, which would cause us to see
function __Z9double_iti(i1) {
You can still search for the function name and find it, but name mangling adds some prefixes and postfixes.
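To see what those prefixes and postfixes encode, here is a toy sketch (not a real demangler - tools like c++filt do this properly) that recovers the name in the simple case above, where the Itanium-style mangling is _Z, a name length, the name, then argument type codes:

```javascript
// Toy name "demangler" for the simplest case only (illustration, not a real
// implementation): _+Z<len><name><argcodes>, e.g. __Z9double_iti.
function demangleSimple(sym) {
  var m = /^_+Z(\d+)(.*)$/.exec(sym);
  if (!m) return sym;               // not mangled (e.g. C symbols like _main)
  var len = parseInt(m[1], 10);     // "9" -> the name is 9 characters long
  return m[2].slice(0, len);        // drop the trailing argument codes ("i")
}

console.log(demangleSimple("__Z9double_iti")); // "double_it"
console.log(demangleSimple("_main"));          // "_main" (passes through)
```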

asm.js Stuff

Once we can find our code, it's easy to keep poking around. For example, main() calls puts() - how is that implemented? Searching for _puts (again, remember the prefix _) shows that it is accessed from
var asm = (function(global, env, buffer) {
  'use asm';
  // ..
  var _puts=env._puts;
  // ..
  // ..main(), which uses _puts..
  // ..
})(.., { .. "_puts": _puts .. }, buffer);
All asm.js code is enclosed in a function (this makes it easier to optimize - it does not depend on variables from outside scopes, which could change). puts(), it turns out, is written not in asm.js, but in normal JS, and we pass it into the asm.js block so it is accessible - by simply storing it in a local variable also called _puts. Looking further up in the code, we can find where puts() is implemented in normal JS. As background, Emscripten allows you to implement C library APIs either in C code (which is compiled) or normal JS code, which is processed a little and then just included in the code. The latter are called "JS libraries" and puts() is an example of one.
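We can sketch that pattern with a runnable toy (hypothetical names - js_log stands in for a JS library function like puts()):

```javascript
// Sketch of the "JS library" pattern: a normal JS function is passed into
// the asm.js-style module through the env object, stored in a local, and
// called from inside. Names here are made up for illustration.
var messages = [];
function js_log(code) {              // a "JS library" function, in plain JS
  messages.push("code " + code);
}

function makeAsm(global, env, buffer) {
  "use asm";
  var _js_log = env._js_log;         // imported, like _puts in Emscripten output
  function _main() {
    _js_log(8);                      // call out to the JS library function
    return 0;
  }
  return { _main: _main };
}

var asm = makeAsm(typeof window !== "undefined" ? window : global,
                  { "_js_log": js_log }, new ArrayBuffer(0x10000));
asm._main();
console.log(messages[0]); // "code 8"
```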


You don't need to read the code that is output by any of the compilers you use, including Emscripten - compilers emit code meant to be executed, not understood. But still, sometimes it can be interesting to read it. And it's easier to do with a compiler that emits JavaScript, because even if it isn't typical hand-written JavaScript, it is still in a fairly human-readable format.

Thursday, November 14, 2013

C++ to JavaScript: Emscripten, Mandreel, and now Duetto

There are currently at least 3 compilers from C/C++ to JavaScript: Emscripten, Mandreel and the just-launched Duetto. Now that Duetto is out (congrats!), it's possible to do a comparison between the 3, which ends up being interesting as there are some big differences but also big similarities between them.

Disclaimer: I founded the Emscripten project and work on it, so unsurprisingly I have more knowledge about that one than the other two. It is likely that I got some stuff wrong here, please correct me if so! :)

A quick overview is in the following table:


                      Emscripten                  Mandreel                    Duetto
License               open source (permissive)    proprietary                 mixed (see below)
Architecture          LLVM IR backend (external)  LLVM tblgen backend         LLVM IR backend
Memory model          singleton typed array       singleton typed array       JS objects
C/C++ compatibility   full                        full                        partial
LLVM IR opts          full                        full                        partial
Backend opts          low-level JS                LLVM                        ?
Call model            JS native                   C stack in typed array      JS native
Loop recreation       Emscripten relooper         Custom (relooper-inspired)  Emscripten relooper
APIs                  standard C (SDL, etc.)      custom                      HTML5-based
Other                 -                           non-JS targets too          client-server

More detail on each of those factors:
  • License: Licenses range from Emscripten which is fully open source with a permissive license (MIT/LLVM), to Mandreel which is fully proprietary, to Duetto which is somewhere in the middle - the Duetto core compiler has a permissive open source license (LLVM), the Duetto libraries+headers are dual licensed GPL/proprietary, and Duetto premium features are proprietary.

    Duetto includes core system libraries that must be linked together with your code, like libc and libcxx. Those have been relicensed to the GPL (from original licenses that were permissive, like MIT for libcxx), and this implies that the Duetto output for a project will be a mix of that project's code plus GPL'd library code, so it looks like the entire thing must be GPL'd, or you must buy a proprietary license.
  • Based on: All of these compilers use LLVM. It has become in many ways the default choice in this space.
  • Architecture: While they all use LLVM, these compilers use it in quite different ways. Mandreel has an LLVM backend using the shared backend codegen infrastructure, using tblgen, the selection DAG, etc. - this is the way most LLVM backends are written, for example the x86 and ARM backends, so Mandreel's architecture is the closest to a "typical" LLVM-based compiler. On the other hand, Duetto implements a backend inside LLVM that does not use that shared backend code, and instead basically calls out into Duetto code that processes the llvm::Module itself into JavaScript, which means it processes LLVM IR (which is entirely separate from the backend selection DAG: LLVM IR is lowered into the selection DAG). Emscripten also processes LLVM IR, like Duetto, but does so outside of LLVM itself - it parses LLVM IR that is exported from LLVM.
  • Memory model: Mandreel and Emscripten share a similar memory model, using a singleton typed array with aliasing views to emulate something extremely similar to how LLVM IR (and C programs) view memory. That includes aliasing (you can load a 32-bit value and split it into two 16-bit values, or you can read two 16-bit values and get the same result), as well as the fact that pointers are all interchangeable (aside from casting rules, they are just numeric values that refer to a place in a single contiguous memory space) and unsafe accesses (if you have a pointer to an int, even if there is no valid data after it, you can still read x[1] or x[17]; hopefully the program made sure there is in fact valid data there!). The odd one out is Duetto, which uses JS objects, so each C++ class you instantiate has its own object. This generates more "normal"-looking JS, at least at first glance - there are objects, there are properties accessed on objects - but it is complex to make it match up to the LLVM memory model (for example, pointers to objects in Duetto need to be able to represent both the JS object holding the data and a numeric offset so that pointer math works, etc.), which adds overhead (see more discussion in the Performance Analysis section below).

    Another result of this memory model is that Duetto allocates and frees JS objects all the time. This has both advantages and disadvantages: On the plus side it means that objects no longer referred to can be reclaimed by the JS engine's garbage collector (GC), so at any point in time the total memory used can be just what is currently alive - I say "can be" and not "will be" because that assumes that the JS engine has a moving GC (many do not), and even so this will only be true right after a GC, and not all the time. And on the minus side for Duetto it means that the overhead of object allocation and collection is present, whereas for Emscripten and Mandreel it is not - in those compilers, objects are just ranges in the singleton typed array, the JS GC is not even aware of them.

    Furthermore, each object will take more memory in the Duetto model: JS objects do not just store the raw data in them, but also have various overhead related to the VM (for example, most VMs have a pointer to "shape" info for the object, to optimize accesses to it, and other stuff), whereas in Emscripten and Mandreel literally the same amount of memory is used as C would use. On the other hand, Emscripten and Mandreel have a typed array that may not be fully used at all times, which can be wasteful. That limitation can be partially removed - for example, Emscripten has an option to grow the heap only when required (through the POSIX sbrk call), and similarly it could be shrunk as well - which would leave only fragmentation as a concern; and fragmentation is definitely a real concern for long-running code. So which is better will really depend on the application and use case. For short-running code it seems highly likely that Emscripten and Mandreel will use less memory, whereas for long-running code it might go either way.

    An advantage of the Duetto model is security: if you read beyond the limit of one array, you will not read data from another. Basically, Duetto is compiling C/C++ into something more like Java or C#, where you do bounds checks on each array and so forth. This can prevent lots of bugs, obviously. However, it is foreign to how C/C++ normally work - this is an advantage not over Emscripten and Mandreel, but over all C++ compilers, it is an inherent difference compared to the normal way C++ compilers are implemented. It does come with some security benefits, however it also has downsides in terms of performance (those bounds checks just mentioned) and compatibility (see next section). Personally, I would argue that if you want that type of security, you would be better off writing code in C# or Java in the first place (which have compilers to JS, for example JSIL and GWT), but I do think that what Duetto is doing is interesting - it's almost like making a new language.
  • Compatibility: Emscripten and Mandreel should be able to run practically any C/C++ codebase, because as mentioned in the previous point, their memory model is essentially identical to LLVM's, assuming the code is reasonably portable. (As a rule of thumb, if C code is portable enough to run on both x86 and ARM, it will work in Emscripten and Mandreel.) Duetto however has a model which is different from LLVM's, which can require that codebases be ported to it. How much of a problem this is for Duetto will depend on the codebases people try to port. There are very portable codebases like Bullet, but much real-world code does depend on memory aliasing and so forth, for example Cube 2. Emscripten experimented with various memory models in the past, and we encountered significant challenges with all of them except the C-like model that Emscripten and Mandreel currently use, which is most compatible with normal C/C++. As a quick test out of curiosity, I tried to build just the script module in Cube 2 using Duetto. I received multiple errors (for example invalid field types in unions, etc.) that, due to the complexity of Cube 2, did not look trivial to fix.

    Note that for a new codebase this wouldn't matter - people writing new Duetto apps will just limit themselves to what Duetto can do.
  • LLVM IR optimizations: The previous point spoke about compatibility with C++ code. A related matter is compatibility with LLVM IR. All of these compilers rely on a frontend to transform source code into LLVM IR, so only the subset of LLVM IR that the frontend generates is relevant - but the LLVM optimizer can perform sophisticated transformations from LLVM IR to other LLVM IR. Therefore, to utilize the LLVM optimizer to its full extent, it is best to support as wide a range of LLVM IR as possible; otherwise, you need to tell the optimizer to limit what it does.

    Compatibility with LLVM IR depends significantly on the memory model, just like compatibility with C++: the optimizer assumes memory aliases just like C assumes it does, and so forth. So Emscripten and Mandreel, which have a memory model that is practically identical to LLVM's, can benefit from the full range of LLVM optimizations to be run, including the ones that assume aliasing and other non-safe aspects of the C memory model. Duetto on the other hand has a different memory model, one not compatible with all C code, and similarly Duetto is not compatible with all LLVM IR. That is, Duetto runs into problems either when source code would need something in LLVM IR that it cannot handle, or when the LLVM optimizer generates IR it can't handle. For source code, you can port it so it is in the proper subset; for optimizations, you need to either disable or limit them.

    It's possible that Duetto does not lose much here - most LLVM IR optimizations do not generate odd things like 512-bit integers nor do they rely heavily on C's memory model - so limiting LLVM's optimizer to what Duetto can handle might work out fine. However, a concern here is that Duetto needs to make sure that all existing and future IR optimizations do not depend on things it can't handle. Only if you have the exact same assumptions as LLVM IR has about memory and so forth, do you have a guarantee that all optimizations are valid for you.
  • Backend optimizations: Only Mandreel benefits from LLVM's backend optimizations (optimizations done after the LLVM IR level), because only it is implemented as an LLVM backend using the shared backend infrastructure, as discussed earlier. That gives Mandreel access to things like register allocation and various other optimizations. These can be quite significant in native (x86, ARM, etc.) builds, but it is less clear how crucial they are on the web, since JS engines will do their own backend-type optimizations (register allocation, licm, etc.) anyhow. In place of those, both Emscripten and Duetto perform their own "backend" optimizations. Emscripten focuses on low-level code (which makes sense given its memory model, as mentioned before), and includes JS-specific passes to remove unneeded variables, shrink overly-large functions, etc. (see my talk from last week at the LLVM developers' conference, also this). I am not familiar with the details of Duetto's backend optimizations, but from their blogposts it sounds like they focus on optimizing things for their memory model (avoiding unnecessary object creation, etc.). If any Duetto people are reading this, I would love to learn more.
  • Call model: Both Emscripten and Duetto use native-looking function calls:   func(param1, param2)   etc. This uses the JS stack and tends to be well-optimized in JS engines. Mandreel on the other hand uses the C stack, that is, it writes to global locations before the call and reads them back in the call. Mandreel's approach is similar to how CPUs work, but I believe no JS engine is able to optimize parameters in that call model into registers, so it will mean memory accesses which are slower. Inlining (by LLVM or by the JS engine) will reduce that overhead for Mandreel, but it will still be noticeable in some workloads, and also is bad for code size.
  • Loop recreation: LLVM IR contains basic blocks and branch instructions, which are easy to model with gotos, but JS lacks goto statements. So all these compilers must either use large switch statements in loops (the closest we can get to a goto), or recreate loops and ifs. They all do the latter. Emscripten has what it calls its Relooper for that purpose, and I see that Emscripten's code is reused in Duetto, which is nice. Mandreel also recreates loops, in a way inspired by the Relooper algorithm but not using the same code.
  • APIs: Emscripten implements typical C APIs like SDL, glut, etc., so often entire apps can just recompile and run. Mandreel on the other hand has its own API which apps need to be written against. It is custom and designed to be portable not just across native and JS builds, but also various other platforms that Mandreel can target, like Flash. Finally Duetto supports accessing HTML APIs from C++, so they are standard APIs in the sense of the web, but not ones that existing code might already be using. For new apps, Duetto's approach is interesting, but for existing ones I think Emscripten's is preferable. Of course there is no reason to not do both in a single compiler. (In fact it could be possible for Emscripten and Duetto to collaborate on that, but looks like Duetto's WebGL headers are GPL licensed which would be a problem for Emscripten, which only includes permissively licensed open source code like MIT.)

    Another factor to consider regarding APIs is native builds. It is often very useful to compile the same app to both native and HTML5 builds, for performance comparisons, benefiting from native debugging and profiling tools, etc. If I understand Duetto's approach properly, it can't compile apps natively because Duetto apps use things like WebGL which are not present as native libraries. Emscripten and Mandreel on the other hand do allow such builds.
  • Other: A few other points of interest are that while Emscripten is purely an LLVM to JS compiler, Mandreel and Duetto have other aspects as well. As already mentioned, Mandreel has a custom API that apps can be developed against, and it is portable across a very large set of target platforms, not just JS. And Duetto is meant not just for client JS but also to be able to run C++ on both sides of a client-server app, sort of like how node.js lets you run the same code on both sides as well, but using C++.
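The loop recreation point above can be sketched by writing the same control flow graph both ways (hand-written illustration, not actual compiler output):

```javascript
// The same control flow graph, expressed two ways (illustration only).
// Without loop recreation: a switch-in-a-loop that emulates gotos between
// basic blocks, the way a naive compiler would have to do it.
function countdown_switch(n) {
  var label = 0, sum = 0;
  while (1) switch (label) {
    case 0:                            // entry / loop condition block
      if (!(n > 0)) { label = 2; continue; }
      label = 1; continue;
    case 1:                            // loop body block
      sum = sum + n; n = n - 1;
      label = 0; continue;             // the "goto" back edge
    case 2:                            // exit block
      return sum;
  }
}

// With loop recreation (what the Relooper aims for): a structured loop,
// which is both more readable and easier for JS engines to optimize.
function countdown_loop(n) {
  var sum = 0;
  while (n > 0) { sum += n; n--; }
  return sum;
}

console.log(countdown_switch(5)); // 15
console.log(countdown_loop(5));   // 15
```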

Interim Summary

As mentioned in the introduction, there are big differences but also big similarities here. Often two out of the three compilers are very similar or identical on a particular aspect, but which two changes from aspect to aspect. Overall I think it's interesting how big the differences are, probably larger than the differences between modern JS engines or modern C compilers! I suspect that is because compiling C++ to JS is a newer problem and the industry is still figuring out the best way to do it.

Performance Analysis

What you don't see in this blogpost are benchmarks. That's intentional, because it's very hard to fairly benchmark code across compilers as different as this. It's practically a given that benchmarks can be found that show each compiler to be best, so I won't bother to do that. Instead, this section contains an analysis of how the different compilers' architectures will affect performance, to the best of my understanding.

Overall the biggest factor responsible for performance is likely the memory model. Over the last 3 years Emscripten tried various approaches including JS objects, arrays, typed arrays in aliased and non-aliased modes, arrays with "compressed indexes" (aka QUANTUM_SIZE=1), and finally ended up on aliased typed arrays - the model described above, and shared with Mandreel - because they were greatly superior in performance. They are better both because they model memory essentially the same way LLVM does, so the full set of LLVM optimizations can be run, and also because being low-level they are very easy for JS engines to optimize - reads and writes in LLVM IR become reads and writes of typed arrays in JS, which can become simple reads and writes from memory in modern JS engines, and so forth. When emitting the asm.js subset of JS, this is even more precise (as it avoids cases where reads can be out of bounds and return undefined, for example, which would have meant it isn't a simple memory read any more), and for those reasons this approach of compiling LLVM to JS can get pretty close to native speed.
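The aliased typed array model described above is easy to sketch in plain JS (the variable names mirror Emscripten's HEAP views, but this is an illustration, not Emscripten code):

```javascript
// One ArrayBuffer with aliasing typed-array views over it: the same bytes
// can be read as different C types, and pointers are just numbers.
var buffer = new ArrayBuffer(64);
var HEAP8  = new Int8Array(buffer);
var HEAP16 = new Int16Array(buffer);
var HEAP32 = new Int32Array(buffer);

// Write a 32-bit value at "address" 0, then read it back as two 16-bit
// halves (results shown assume a little-endian engine, as on x86/ARM).
HEAP32[0] = 0x12345678;
console.log(HEAP16[0].toString(16)); // "5678"
console.log(HEAP16[1].toString(16)); // "1234"

// A C `int*` holding byte address p reads memory as HEAP32[p >> 2].
var p = 8;               // a pointer is just a number
HEAP32[p >> 2] = 42;
console.log(HEAP8[8]);   // 42 (the low byte of that int)
```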

I would be surprised if Duetto gets to that level of performance. It uses property accesses on JS objects, which is far more complex than typed array reads and writes - instead of a simple read or write from memory, JS object property accesses are optimized using hidden class optimizations, PICs, and so forth. These can be fast in many cases, but as heuristic-based optimizations they are unpredictable. They also require overhead to be performed, both in terms of time and memory. Furthermore, as we are not just reading and writing numbers (as in the Emscripten/Mandreel model) but also references to JS objects, things like write barriers may be executed in the VM (or if not, then more time would be spent GCing). And again, all of this is competing with reads and writes from typed arrays, which are among the simplest things to optimize in JS since their type is known and they are just flat. It is true that in the best case a JS VM can turn a property access on a JS object into a direct read from memory, but even a statically typed VM like the JVM generally doesn't manage to run at C-like speeds, and here we are talking about dynamically-typed VMs which have a much more complex optimization problem before them.

Perhaps the crux of the matter is that Duetto basically transforms a C++ codebase so it runs in a VM using a relatively safe memory model and GC. There have been various such attempts - Managed C++, Emscripten's and other JS compilers' attempts in this space as mentioned before, etc. - and overall we know this is possible, if you limit yourself to a subset of C++. We also know it has benefits, such as somewhat higher security as mentioned earlier on. But that increased security stems from the compiled C++ being more like compiled Java and C#, because just like them it runs in a VM, is JITed, uses a GC, benefits from PICs, does bounds checks for safety, etc. In other words, I believe that the Duetto model will in the best case approach the performance of Java and C#, while the Emscripten/Mandreel model will in the best case approach the performance of C and C++. Furthermore, their ability to reach the best-case performance is not identical: Approaching C and C++ speed is far simpler because the model is simpler, and we have in fact already mentioned that the Emscripten/Mandreel approach can indeed approach that limit in many cases (see link above). The Duetto model may be able to reach the speed of Java and C#, but that is both more complex and yet unproven.

Perhaps I am missing something, and there is a reason Duetto-compiled C++ running on a JavaScript VM can outperform the JVM and .NET? That seems unlikely to me, because if it were possible to do so, then surely the JVM and .NET would have done whatever allows such speedups already - assuming C++ in Duetto's VM-like model is generally analogous to the JVM/.NET model, which is my best guess for the reasons mentioned before. Certainly it is possible that I overlooked something here and there is in fact a fundamental difference, please correct me if so.

I do believe that Duetto can outperform typical handwritten JS. It has two advantages over that: First, it can run the LLVM optimizer on the code, and second, it generates JS that is implicitly statically typed. For example, if the code contains a.x then x should always be of the same type, since it originated from C++ code that had that property (C unions are a complication for this but that is solvable as well, with some overhead). In hand-written JS, on the other hand, it is easy to do things that break that assumption. Again, the model Duetto uses is closer to C# or Java where there are objects and a garbage collector and so forth, like JS, but at least types are static, unlike JS. And modern JS engines are quite good at looking for hidden static types, so Duetto should benefit from those optimizations.

One last note on performance: In small benchmarks the difference between C/C++, Java/C#, and JS can be quite small. JS VMs as well as the JVM and CLR have JITs that can often optimize small loops and such into extremely efficient code, competitive with C/C++ (and LuaJIT can even outperform it in some cases). Things change, however, with large codebases. The larger the codebase, the harder dynamic, heuristic-based optimizations are to perfect; this is true for Java and C# and far more true for JS. For example, a small inner loop containing a temporary object allocation in JS might be analyzed to not escape, and turn into just some constant space on the stack. However, in a large codebase with many function calls, including functions too large and/or too complex to inline, etc., such analyses are often less productive, whereas in C/C++ such temporaries simply remain on the stack. As another example, JS engines need to analyze types dynamically, and in small functions often do so very well, but the larger the relevant code, the higher the risk for any imprecision in the analysis to "leak" everywhere else (for example, one careless function returning a double instead of an int will send a double into all the callers), whereas in C/C++ there is no such risk. (asm.js tries to get as close as possible to C/C++ by ensuring, via a type system, that there are type coercions in all necessary places, which avoids the problem just described - that double would be immediately turned into an int in all callers.)
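That coercion behavior can be sketched directly (hand-written illustration, not compiler output):

```javascript
// A "careless" function that can return a double instead of an int.
function careless(x) {
  return x / 2;                  // e.g. careless(5) is 2.5
}

// An asm.js-style caller coerces the return value back to an int at the
// call site, so the double never leaks into the rest of the code.
function caller(x) {
  var r = 0;
  r = careless(x) | 0;           // |0 truncates to a 32-bit integer
  return r;
}

console.log(careless(5));  // 2.5
console.log(caller(5));    // 2
```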

For all these reasons, small benchmarks often do not show differences, but large ones can. So how fast a compiler is depends a lot on what kind of code is compiled with it. When Duetto is used on a small benchmark, I would not be surprised if it performs very well. My concerns are mainly about large codebases, which I don't think I've seen Duetto report numbers on yet. (By "large" I mean something like Cube 2 or Epic Citadel, hundreds of thousands of lines of code, consisting of many different components, etc.)

General thoughts on Duetto

If Duetto can in fact get close to the speed of Java and C#, that's good enough for many applications, even if it is slower than C and C++ (and Emscripten and Mandreel). And in some cases Java and C# are fairly comparable to C and C++ anyhow. Finally, of course performance is not the only thing that matters when developing applications.

Putting aside performance, Duetto's approach has some advantages. One nice thing about Duetto is that it uses Web APIs in C++, so they might avoid a little overhead that Emscripten and Mandreel have. For example, Emscripten and Mandreel translate OpenGL calls into WebGL calls, while Duetto will have what amounts to translating WebGL into WebGL. That's definitely simpler, and it might be faster in some cases.

Another interesting aspect of Duetto is that since C++ objects become JS objects, it might be easier to interoperate with normal JS outside of the compiled codebase. This seems like it could work in some cases; however, if the codebase is minified, I don't see how it is possible without either duplicating properties or using wrapper code, which leaves Duetto in basically the same position as Emscripten in this regard.

Finally, the client-server integration that Duetto does looks quite unique, and it will be interesting to see whether people flock to the idea of writing C++ for webservers, and integrating that code with compiled C++ running as JS on the client. That's an original idea AFAIK, and it's cool.

In summary, congratulations to Duetto on launching! Nice to see new and interesting ideas being tried out in the C++/JS space.

Tuesday, November 12, 2013

Two Recent Emscripten Demos

Whenever I do a presentation I try to have at least one new demo; here are two that I made recently:
  • 3 screens demo, for the Mozilla 2013 Summit. The demo is of 3 screens inside the BananaBread 3D first person shooter, a port of Sauerbraten. One screen shows a list of recent tweets, another shows a video, and the third shows a playable Doom-based game (see details here). So you can play Doom while you play Sauerbraten ;) Controls are a little tricky, but see the demo page itself for how to play each game.
  • Clang in the browser, for the LLVM Developer's Conference. The name says it all: this is basically the entire clang C++ compiler running in JS. The port isn't very polished (I didn't try to optimize it, you need to reload the page to recompile, etc.); I basically just got it running.

Thursday, August 22, 2013

Outlining: a workaround for JITs and big functions

Just In Time (JIT) compilers, like JavaScript engines, receive source code and compile it at runtime. That means that end users can notice how long things take to compile, so avoiding noticeable compilation pauses is important. But JavaScript engines and other modern JITs perform optimizations whose complexity is worse than linear, for example SSA analysis and register allocation. That means that aside from total program size, function size matters as well: the time it takes to compile your program may be dominated by a few large functions that have lots of code and variables in them. If compilation is N^2 per function (where N is the number of AST nodes or variables or such), then having function sizes

  100, 100, 100, 100, 100, 100, 100, 100, 100, 100

is much better than

  1000
even though the total amount of code is the same (10 functions of size 100 take 10 * 100^2 or 100,000 units of time, while a single function of size 1000 takes 1,000,000).
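That arithmetic can be sketched directly, under the hypothetical quadratic cost model described above:

```javascript
// Hypothetical quadratic cost model: compiling a function with N AST nodes
// costs N^2 units of time.
function compileCost(functionSizes) {
  return functionSizes.reduce((total, n) => total + n * n, 0);
}

const tenSmall = compileCost(Array(10).fill(100)); // 10 * 100^2 = 100,000
const oneBig = compileCost([1000]);                // 1000^2 = 1,000,000
```

The same total amount of code compiles ten times faster when split into ten functions.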

Now, very large functions are generally rare, since most people wouldn't write them by hand. But automatic code generators can create them, as well as compilers that perform optimizations like inlining (e.g., closure compiler, LLVM as used by emscripten and mandreel, etc.).

What do JavaScript engines do with very large functions? Generally speaking, since compiling them takes a very long time, JITs have simply not fully optimized them, leaving them in the interpreter or baseline JIT. This avoids the long compilation pause, but leaves the code running slower. As more JS engines add background compilation, the compilation pause becomes less noticeable, but it still means a significant amount of additional time during which the code executes before it is fully optimized.

This problem is not unique to JavaScript engines, of course, it is a concern for any JIT that does complex optimizations, for example the JVM. However, JavaScript engines are particularly concerned with noticeable compilation pauses because they usually run on end users' machines (as opposed to remote servers), and they run arbitrary content off the web.

What can we do about this? It could be solved in one of two general ways, either in the JavaScript engine, or by preprocessing the code ahead of time.

In the JavaScript engine, as already mentioned before, background compilation makes it feasible to compile even big and slowly-compiling functions, since the pause is not directly noticeable. But this still means a long delay before the fully-optimized version runs, and the background thread consumes battery power and perhaps competes with other threads and processes. Also, even in JavaScript engines with background compilation, there are often limits still present, just to avoid the risk of a background thread running a ridiculously long time. Finally, a JavaScript engine might be able to compile separate functions in parallel but not parallelize inside a single function. So all of these are partial solutions at best.

Another JavaScript engine option is to not compile entire functions at a time. SpiderMonkey experimented with this using chunked compilation: basically, parts of a function could be compiled separately. This was done successfully in JaegerMonkey, but it is my understanding that it turned out to be quite complex and bug prone, and was deemed not worth doing in the newer IonMonkey.

(Of course the best JavaScript engine solution is to just make compilation faster. But huge amounts of work have already gone into that in all modern JavaScript engines, so it is not realistic to expect sudden large improvements there.)

That leaves the other option, of preprocessing the code ahead of time. One way in emscripten for example is to tell LLVM to not perform inlining on some part of your code (if inlining is the cause of a big function), but in general there is not always a simple solution.

Another preprocessing option is to work at the JavaScript level and break up huge functions into smaller pieces. I originally thought this would be too complicated and/or generate too much overhead, but I was eventually convinced to go down this route. And a good thing too ;) because even without much tuning, I think the results are promising:

The X axis is the outlining limit, or how large a function size is acceptable before we start to break it up into smaller pieces, by taking chunks of it and moving them outside into a new function - sort of the opposite of inlining, hence outlining. The number itself is the amount of AST nodes. 0 is a special value meaning we do not do anything, then as you move to the right we do more and more breaking up, as we aim for smaller maximal function sizes.

The Y axis is time (in milliseconds), and each measurement has both compilation and run time. Compilation measures the total time needed to compile all the code by the JavaScript engine, and runtime is how long the benchmark - a SQLite testcase - takes to run. (To measure compilation time, I looked at startup time in Firefox, which is convenient because Firefox does ahead of time (AOT) compilation of the asm.js subset of JavaScript, giving an easy way to measure how long it takes to fully optimize all the code in the program. In non-AOT compilers there would be less startup time of course, but as explained before the costs of long compilation time would still be felt: either functions would not be optimized at all, or compile slowly either on the main thread or a background thread, etc. - those effects are less convenient to measure, but still there.)

The benchmark starts at 5 seconds to compile and 5 seconds to run (running creates a SQLite database in memory, generates a few thousand entries, then does some selects etc. on them). 5 seconds to compile all the code is obviously a long time, and most of it is from a single function, yy_reduce. An outlining limit of 160,000 is enough to break that function up into 2 pieces but do nothing else, and the result is over 4x faster compilation (from over 5 seconds to just over 1 second) with a runtime slowdown of only 7%. Outlining more aggressively to 80,000 speeds up compilation by 6x, but the runtime slowdown increases to 35%. At the far right of the graph the overhead becomes very painful (runtime execution is over 4x slower), so the best compromise is probably somewhere in the middle to left of the graph.

Overall there is definitely a tradeoff here, but luckily even just breaking up the very largest function can be very helpful. Polynomial compilation times can be disproportionately influenced by the very largest function, so that makes sense.

What is the additional overhead that affects runtime as we go to the right on the graph? To understand what is going on, let's detail what the outlining optimization does. As already mentioned, the idea is to take code in the function and move it outside. To do that, it performs three stacked optimizations:

1. Aggressive variable elimination. Emscripten normally works hard to eliminate unneeded variables, for example

var x = y*2;
f(x);

would become

f(y*2);
But there is a potential tradeoff here. While in this example we replace a variable with a small expression, if the expression is large and shows up multiple times, it can increase code size while reducing variable count. However, for outlining, we do want to remove variables at (almost) all costs, because the more local variables we have, the more variables might happen to be shared between code staying in the function and code being moved out. That means we need to ship those variables between the two functions, basically by spilling them to the stack and then reading them from there, which adds overhead (see later for more details). The first outliner pass therefore finds practically all variables that are safe to remove, and removes them, even if code size increases.
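As a hedged sketch of that tradeoff (toy functions, not real outliner output): eliminating a variable that holds a pure expression duplicates the expression at each use, growing the code, but leaving one fewer local that might need to be spilled.

```javascript
// Before: the local x holds a pure expression used twice.
function before(a, b) {
  var x = (a + b) | 0;
  return (((x * 2) | 0) + ((x * 3) | 0)) | 0;
}

// After aggressive elimination: the expression is duplicated at each use.
// The code got bigger, but there is one less local to spill when outlining.
function after(a, b) {
  return (((((a + b) | 0) * 2) | 0) + ((((a + b) | 0) * 3) | 0)) | 0;
}
```

The elimination is only safe because the expression is pure; an expression with side effects could not be duplicated this way.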

2. Code flattening. It is straightforward to split up code that is in a straight line,

if (a) f(a);
if (b) f(b);
if (c) f(c);
if (d) f(d);

Here we can pick any line at which to split. But things are harder if the code is nested

if (a) {
  f(a);
} else {
  if (b) {
    f(b);
  } else {
    if (c) {
      f(c);
    } else {
      if (d) f(d);
    }
  }
}

The second pass "flattens" code by breaking up if-else chains, for example in this case it would generate something like

var m = 1;
if (a) {
  m = 0;
  f(a);
}
if (m && b) {
  m = 0;
  f(b);
}
if (m && c) {
  m = 0;
  f(c);
}
if (m && d) {
  f(d);
}

The ifs are no longer nested, and form a straightline list of statements which is easier to process. However, we have added overhead here, both in terms of code size and in additional work that is done.

3. Outline code. Now that we have minimized the number of local variables and made the code sufficiently flat to be easy to process, we recursively search the AST for straightline lists of statements where we can outline a particular range of them. When we find a suitable one (for details of the search, see the code), we outline it: create a new function, move the code into it, and add spill/unspill code on both sides to pass over the local variables (only the ones that are necessary, of course). A further issue to handle is "forwarding" control flow changes: for example, if we outlined a break, we basically forward that to the calling function, and so forth. Then in the original function the code has been replaced with something like this:

HEAP[x] = 0; // clear control var
HEAP[x+4] = a; // spill 2 locals
HEAP[x+8] = b;
outlinedCode(); // do the call
b = HEAP[x+8];  // read 2 locals
c = HEAP[x+12];
if (HEAP[x] == 1) {
  continue; // forwarded continue
}

We spill some variables and read some back afterwards (note that they are not necessarily the same ones - here the outlined code does not modify a, so there is no need to read it back). We also use a location in HEAP as a control flow helper variable, to tell us whether the outlined code wants us to perform, in this case, a continue.

Overall, while there is additional size and overhead here, with this approach we can replace a very large amount of code with just a function call to it plus some bookkeeping. As the graph above shows, in some benchmarks this can decrease compilation time very significantly.
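To make both halves of the transformation concrete, here is a hedged, hypothetical sketch: a small Int32Array stands in for the Emscripten HEAP (indexed by element here rather than by byte offset, for simplicity), and all names and the moved code are made up.

```javascript
// A small heap standing in for Emscripten's HEAP, plus fixed slot indices.
const HEAP = new Int32Array(16);
const CTRL = 0, SLOT_A = 1, SLOT_B = 2;

// The outlined chunk: it reads its inputs from the heap, writes changed
// locals back, and sets the control slot if it wants the caller to continue.
function outlinedCode() {
  const a = HEAP[SLOT_A];
  let b = HEAP[SLOT_B];
  b = (b + a) | 0;            // the moved code modifies b but not a
  HEAP[SLOT_B] = b;           // spill the changed local back
  if (b > 10) HEAP[CTRL] = 1; // ask the caller to perform a continue
}

// The original function's call site, mirroring the snippet in the post.
function original(a, b) {
  let total = 0;
  for (let i = 0; i < 3; i++) {
    HEAP[CTRL] = 0;                 // clear control var
    HEAP[SLOT_A] = a;               // spill 2 locals
    HEAP[SLOT_B] = b;
    outlinedCode();                 // do the call
    b = HEAP[SLOT_B];               // read back the local that may change
    if (HEAP[CTRL] === 1) continue; // forwarded continue
    total = (total + b) | 0;
  }
  return total;
}
```

The spill/read-back pairs around the call are exactly the bookkeeping overhead that grows as outlining gets more aggressive.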

To try this optimization in an emscripten-using project, just add   -s OUTLINING_LIMIT=N   to your compilation flags (during conversion of bitcode to JS), where N is a number.

Thursday, June 20, 2013

What asm.js is and what asm.js isn't

This is going to be a bit long, so tl;dr asm.js is a formalization of a pattern of JavaScript that has been developing in a natural way over years. It isn't a new VM or a new JIT or anything like that.

asm.js is a subset of JavaScript, defined with the goal of being easily optimizable and used primarily as a compiler target from languages like C and C++. I've seen some recent online discussions where people appear to misunderstand what those things mean, which motivated me to write this post, where I'll give my perspective on asm.js together with some context and history.

The first thing worth mentioning here is that compiling into JavaScript is nothing new. It's been done since at least 2006 with Google Web Toolkit (GWT), which can compile Java into JavaScript (GWT also does a lot more - it's a complete toolkit for writing complex clientside apps - but I'll focus on the compiler part of it). Many other compilers from various languages to JavaScript have shown up since then, both for existing languages like C++ and C# and for new languages like CoffeeScript, TypeScript and Dart.

Compiled code (that is, JavaScript that is the output of a compiler) can look odd. It's often not in a form that we would be likely to write by hand. And each compiler has a particular pattern or style of code that it emits: For example, a compiler can translate classes in one language into JavaScript classes using prototypal inheritance (which might look a little more like "typical" JS), or it can implement those classes with function calls without inheritance (passing "this" manually; this might look a little less "typical"), etc. Each compiler has a way in which it works, and that gives it a particular pattern of output.

Different patterns of JavaScript can run at different speeds in different engines. This is obvious of course. On one specific benchmark a JS engine can be faster than another, but in general no JS engine is "the fastest", because different benchmarks can test different things - use of classes, garbage collection, integer math, etc. When comparing JS engines, one can be better on one of those aspects and slower on another, because optimizing for them can be quite separate. Look at the individual parts of benchmarks on AWFY for example (click "breakdown"), and you'll see that.

The same is true for compiled code: different JS engines can be faster or slower on particular patterns of compiled code. For example, it was recently noticed that Chrome is sometimes much faster than Firefox on GWT-generated code; as you can see in the bug there, recent work has narrowed the gap. There is nothing odd about that: one JS engine was faster than another on a particular pattern, and work was done to catch up. This is how things work.

Aside from Java, which GWT compiles, another important language is C++. There have been at least two compilers from C++ to JavaScript in existence over the last few years, Emscripten and Mandreel. They are separate projects and quite different in many ways - for example, Emscripten is open source and targets just JavaScript, while Mandreel is closed source and can target other things as well like Flash - but over a few years they converged on basically the same pattern of JavaScript for compiled C++ code. That pattern involves using a singleton typed array to represent memory, and bitwise operations (|0, etc.) to get values to behave like C++ integers (the JavaScript language only has doubles).
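As a hedged sketch of what that pattern looks like (a made-up function, not actual compiler output), a singleton typed array plays the role of memory, and `|0` makes arithmetic wrap like C++ 32-bit integers:

```javascript
// Hypothetical sketch of the Emscripten/Mandreel pattern: one typed array
// represents all of "memory", and |0 coerces values to 32-bit integers.
const buffer = new ArrayBuffer(64 * 1024);
const HEAP32 = new Int32Array(buffer); // the singleton typed array

// Roughly what `int sum(int* p, int n)` might compile to:
function sum(p, n) {
  p = p | 0;
  n = n | 0;
  var total = 0;
  for (var i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
    // p + 4*i is a byte address; >>2 converts it to a 32-bit element index
    total = (total + HEAP32[(p + (i << 2)) >> 2]) | 0;
  }
  return total | 0;
}
```

Note how overflow wraps exactly as a C++ `int` would, because `|0` truncates the double-precision intermediate back to 32 bits on every iteration.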

We might say that Emscripten and Mandreel "discovered" a useful pattern of JavaScript, where "discovered" means the same as when Crockford discovered JSON. Of course typed arrays, bitwise ops and JavaScript Object Notation syntax all existed earlier, but noticing particular ways in which they can be especially useful is significant. Today it feels very natural to use JSON as a data interchange format - it almost feels like it was originally designed to be used in that manner, even though of course it was not - and likewise, if you are writing a compiler from C++ to JavaScript, it is natural to use |0 and a singleton typed array to do so.

Luckily for the Emscripten/Mandreel pattern, that type of code benefits from many years of work that went into JavaScript engines. For example, they are all very good at optimizing code whose types do not change, and the Emscripten/Mandreel pattern generates code that is implicitly statically typed, since it originated as statically typed C++, so in fact types should not change. Likewise, the bitwise operators that are important in the Emscripten/Mandreel pattern appeared in crypto tests in major benchmarks like SunSpider, so they are already well-optimized for.

In other words, the Emscripten/Mandreel pattern of code could be quite fast due to it focusing on things that JS engines already did well. This isn't a coincidence of course, as both Emscripten and Mandreel tested in browsers and decided to generate code that ran as quickly as possible. Then, as the output of these compilers was increasingly used on the web, browsers started to optimize for them in more specific ways. Google added a benchmark of Mandreel code to the Octane benchmark (the successor to the popular V8 benchmark), and both Chrome and Firefox have optimized for both Mandreel and Emscripten for some time now (as a quick search in their bug trackers can show). So the Emscripten/Mandreel pattern became yet another pattern of JavaScript that JavaScript engines optimize for, alongside all the others which are represented in the familiar benchmarks: SunSpider, Octane, Kraken, etc. This was all a very natural process.

Here is where we get to asm.js. While Emscripten/Mandreel code can run quickly, there was still some gap between it and native code. The gap was not huge - in many cases it ran just 3x slower than native code, which is pretty close to, say, Java - but still problematic in some cases (for example, high-performance games). So Mozilla began a research project to investigate ways to narrow that gap. Developed in the open, asm.js started to formally define the Emscripten/Mandreel pattern as a type system. Formalizing it made us think about all the corner cases, and we found various places in Emscripten/Mandreel code where types could change at runtime, which is contrary to the goal of the pattern, and can make things slower. asm.js's goal was to get rid of all those pitfalls.

The first set of benchmark results was promising - execution close to 2x slower than native, even on larger benchmarks. Achieving that speedup in Firefox took just 1 engineer only 3 months, since it doesn't require a new VM or a new JIT, it only required adding some additional optimizations to Firefox's existing JS engine. Soon after, Google showed very large speedups on asm.js code as well, as noted in the IO keynote and as you can see on AWFY. (In fact, over the last few days there have been several speedups noticeable there on both browsers - these improvements are happening as we speak!) Again, these large speedups in two browsers were achieved quickly because no new VM or JIT was written, just additional optimizations to existing JS VMs. So it would be incorrect to say that asm.js is a "new VM".

I've also seen people say that asm.js is a new "web technology". Loosely speaking, I suppose we could use that term, but only if we understand that if we say "asm.js is a web technology" then that is totally different from when we say "WebGL is a web technology". Something like WebGL is in fact what we normally call a "web technology" - it was standardized, and browsers need to decide to support it and then do work to implement it. asm.js, on the other hand, is a certain pattern of compiled code in JavaScript, which is already standardized and supported. asm.js, just like GWT's output pattern, does not need to be standardized, nor do browsers need to "support" it, nor to standardize whatever optimizations they perform for it. Browsers already support JavaScript and compete on JavaScript speed, so asm.js, GWT output, CoffeeScript output, etc., all work in them and are optimized to varying degrees.

Regarding the term "support": As before, I suppose we can loosely talk about browsers "supporting" asm.js, but we need to be sure what we mean. On the one hand, when we talk about WebGL then it is clear what we mean when we say something like "IE10 does not support WebGL". But if we say "a browser supports asm.js", we intend something very different - I guess people using that phrase mean something like "a browser that spent some time to optimize for asm.js". But I don't see people saying "a browser supports GWT output", so I suspect that using the term "support" comes from the mistaken notion that asm.js is something more than a pattern of JavaScript, which is not the case.

Why, then, do some people apparently still think asm.js is anything more than a pattern of JavaScript? I can't be sure, but here are some possibilities and clarifications to them:

1. asm.js code basically defines a low-level VM, in a sense: There is a singleton array for "memory" and all the operations are on that. While true, it is also true for Emscripten and Mandreel output, and other compilers as well. So in some sense this is valid to say, but just in the general sense that we can implement VMs in JavaScript (or any Turing-complete language), which of course we can regardless of asm.js.

2. Some of the initial speedups from asm.js were surprisingly large, and it felt like a big jump from the current speed of JavaScript, not an incremental step like things normally progress in. And big jumps often require something "new", something large and standalone. But as I mentioned above, this was actually a very gradual process. Relevant optimizations for Emscripten/Mandreel code have taken years, and writing SSA-optimizing JITs like CrankShaft and IonMonkey has likewise taken years (there is overlap between these two statements, of course); the speedups on asm.js code are due to those long-term projects being able to really shine on a code pattern that is easy for them to optimize. In fact, many of the recent optimizations that speed up asm.js code do not actually add new optimizations directly; instead they change whether the browser decides to fully optimize it - that is, they get the code to actually reach the optimizing JIT (CrankShaft, IonMonkey, etc.). The power of those optimizing JITs has been there for a while now, but sometimes heuristics prevented it from being used.

3. A related thing is that I sometimes see people say that asm.js code will be "unusably slow" on browsers that do not optimize for it in a special way (just today I saw such a comment on hacker news, for example). First of all this is just false: look at these benchmarks, it is clear that on many of them Firefox and Chrome have essentially the same performance, and where there is a difference, it is noticeably decreasing. But aside from the fact that it is false, it is also disrespectful to the JavaScript VM engineers at Google, Microsoft and Apple: To say that asm.js code will be "unusably slow" on browsers other than Firefox implies that those other VM devs can't match a level of performance that was shown to be possible by their peers. That is a ridiculous thing to say given how talented those devs are, which has been proven countless times.

If someone says it will be "unusably slow" not because they can't reach the same speed but because they won't, then that looks obviously false. Why would any browser decide to not optimize for a type of JavaScript that is being used - including in high-profile things like Epic Citadel - and has relevant benchmarks? All browsers want to be fast on everything, this is a very competitive field, and we have already seen Google optimize for asm.js code as mentioned earlier in the IO keynote; there are also some positive signs from Microsoft as well.

4. Another possible reason is that the asm.js optimization module in Firefox is called OdinMonkey (mentioned for example in this blogpost). SpiderMonkey, Firefox's JavaScript engine, has given its JITs Monkey-related names: TraceMonkey, JaegerMonkey and IonMonkey (although the baseline JIT received no Monkey name, so there is definitely no consistency here). So perhaps OdinMonkey sounds like it could be a new JIT, which seems to imply that asm.js optimizations require a new JIT. But as mentioned in that blogpost, OdinMonkey is not a new JIT; instead it is a module that sits alongside the other parts of the JavaScript engine. What OdinMonkey does is detect the presence of asm.js code, take the parse tree that the normal parser emitted, type check it, and then send that information into the existing IonMonkey optimizing compiler. All the code optimization and code generation of IonMonkey is still being used; no new JIT for asm.js was written. That's why, as mentioned before, writing OdinMonkey took one engineer only 3 months' work (while also working on the spec). Writing a new JIT would take much more time and effort!

5. Also possibly related is the fact that OdinMonkey uses the "use asm" hint to decide to type check the code. This is indeed a bit odd, and feels wrong to some people. I certainly understand that feeling; in fact, when working on the design of asm.js I argued against it. The alternative would be to do some heuristic check: Does this block of code contain nothing impossible in asm.js (no throw statements, for example), and does it contain a singleton typed array, etc.? If so, then start to type check in OdinMonkey. This could achieve practically the same result in my opinion. The heuristic approach does, however, have downsides: heuristics are sometimes wrong and often add overhead, and in practice this optimization won't just "luckily work" on random code - it will be expected and intended by a person using a C++ to JS compiler, so why not have them state that intention? Those are strong arguments for using an explicit hint.

The important thing is that "use asm" does not affect JS semantics (if it does in some case - and new optimizations often do cause bugs in odd corner cases - then that must be fixed just like any other correctness bug). That means that JavaScript engines can ignore it, and if it is ultimately determined to be useless by JS engines then we can just stop emitting it.
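For concreteness, here is a minimal hypothetical sketch of the shape of such a module (not guaranteed to validate as asm.js, but that is the point: an engine that ignores "use asm" just runs it as ordinary JavaScript with identical results):

```javascript
// A minimal asm.js-style module sketch. The module takes the global object,
// a foreign-function object, and a heap buffer, as the pattern prescribes.
function MiniModule(stdlib, foreign, heap) {
  "use asm"; // engines that don't recognize this directive simply ignore it
  var HEAP32 = new stdlib.Int32Array(heap);
  function inc(i) {
    i = i | 0;
    // i << 2 is the byte address of element i; >> 2 converts it back
    HEAP32[i << 2 >> 2] = (HEAP32[i << 2 >> 2] + 1) | 0;
    return HEAP32[i << 2 >> 2] | 0;
  }
  return { inc: inc };
}

const mod = MiniModule(globalThis, {}, new ArrayBuffer(0x10000));
```

Whether or not the engine applies asm.js-specific optimizations, `mod.inc` behaves the same, which is exactly why "use asm" does not affect JS semantics.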

6. asm.js has a spec, and defines a type system. That seems much more "formal" than, say, the output pattern of GWT, and furthermore things that do have specs are generally things like IndexedDB and WebGL, that is, web technologies that need to be standardized and so forth. I think this implies to some people that asm.js is more like WebGL than just a pattern of JavaScript. But not everything with a spec has it for standardization purposes. As mentioned before, one reason for writing a spec and type system for asm.js was to really force us to think about every detail, in order to get rid of all the places where types can change, and other stuff that prevents optimizations. There are three other reasons that IMO justified the effort to write a spec:

* Having a spec makes it easy to communicate things to other people. As I mentioned before, GWT output often used to run (and maybe still does) faster in Chrome than Firefox, and I am not really sure why (probably multiple reasons). GWT and Chrome are two projects from the same company, so it would not be surprising if their devs talk to each other privately, and there would be nothing wrong with it if they did. Emscripten and Firefox are, like GWT and Chromium, two open source projects primarily developed by a single browser vendor, so there is sort of a parallel situation here. To avoid the downsides of private discussions, we felt that writing a spec for asm.js would give us the benefit of being able to say "this (with all the technical details) is why this code runs fast in Firefox." That means that if other browser vendors want to run that code quickly, then they have all the docs they need in order to do so, and on the other side of things, if other compilers like Mandreel want to benefit from those speedups, then once more they have all the information they need as well. Writing a spec (and doing so in the open) makes things public and transparent.

* Having a type system opens up the possibility to do ahead of time (AOT) compilation in a reasonable way. Note that AOT should have already been possible in the output patterns for Emscripten and Mandreel (remember, they are generated from C++, that is AOTed), but actually wasn't because of the few places where types could in fact change. Assuming we did things properly, asm.js should have no such possible type changes anymore, so AOT is possible. And a type system makes it not just possible but quite reasonable to actually do so in practice.

AOT can be very useful in reducing the overhead and risk of optimization heuristics. As mentioned before, in some cases code never even reaches the optimizing JIT due to heuristics not making the optimal decision, and furthermore, collecting the data for those heuristics (how long a function or loop executes, type info for variables, etc.) adds overhead (during a warmup phase before full optimization, and possibly later during deoptimization and recompilation). If code can be compiled in an AOT manner, we simply avoid those two problems: we optimize everything, and we do so immediately. (There is a potential downside, however, in that fully optimizing large amounts of code can take significant time; in Firefox this is partially mitigated by things like compilation using multiple cores, another possibility is to consider falling back to the baseline JIT for particularly slow-to-compile functions.)

These properties of AOT compilation also let it avoid the "stutter" problem - where a game pauses or slows down briefly when new code is executed (like when a new level begins that uses some new mechanics or effects), because that new code is detected as hot and then optimized at precisely the moment it needs to be fast. Stutter can be quite problematic, as it tends to happen at the worst possible times - when something interesting happens - and I've often heard game devs be concerned about it. AOT compilation fully optimizes all the code ahead of time, so the stutter problem is avoided.

A further benefit of AOT is that it gives predictable performance: we know that all the code will be fully optimized, so we can be reasonably confident of performance in a new benchmark we have never tested on, since we do not rely on heuristics to decide what to optimize and when. This has been shown repeatedly when AOT compilation in Firefox achieved good performance the very first time we ran it on a new codebase or benchmark, for example on Epic Citadel, a compiled Lua VM, and others. In my JavaScript benchmarking experiences in the past, that has been rare.

AOT therefore brings several benefits. But with all that said, it is an implementation detail, one possible approach among others. As JS engines continue to investigate ways to run this type of code even better, we will likely see more experimentation in this area.

* One last reason for writing the spec and type system: asm.js began as a research project, and we might want to publish a paper about it at some point. (This is much less important than the other three, but having written out the others, I might as well be complete.)

That sums up point 6, why asm.js has a spec and type system. Finally, two last possible reasons why people might mistakenly think asm.js is a new VM or something like that:

7. Math.imul. Math.imul came up during the design of asm.js, did actually turn into a proposal for standardization (for ES6), and has been implemented in at least Firefox and Chrome so far. Some people seem to think that asm.js relies on Math.imul, which would imply that asm.js relies on new language features in JavaScript. While Math.imul is helpful, it is entirely optional: it makes only a tiny impact on benchmarks, and is very easy to polyfill (which Emscripten does). Perhaps it would have been simpler not to propose it and just use the polyfill code, to avoid any possible confusion. But Math.imul is so simple to define (multiply two 32-bit numbers properly, return the lower 32 bits), and it is essentially the only integer math operation that cannot be expressed in a natural way in JS (remarkably, all the others can, even though JavaScript has no integer types!), that it just felt like a shame not to.
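To see why the polyfill is easy: a full 32-bit product can exceed the 53-bit integer range of JavaScript doubles, so the standard approach (similar in spirit to what Emscripten emits, though the function name here is just for illustration) splits each operand into 16-bit halves whose partial products stay exact:

```javascript
// Sketch of a Math.imul polyfill: 32-bit multiply, returning the low 32 bits.
function imul(a, b) {
  var ah = (a >>> 16) & 0xffff;  // high 16 bits of a
  var al = a & 0xffff;           // low 16 bits of a
  var bh = (b >>> 16) & 0xffff;
  var bl = b & 0xffff;
  // ah * bh contributes only to bits 32 and above, which the final |0
  // truncation discards, so only three of the four partial products matter.
  return (al * bl + (((ah * bl + al * bh) << 16) >>> 0)) | 0;
}
```

Every partial product here fits comfortably in a double, and the final `|0` performs exactly the truncation to a signed 32-bit result that the spec requires.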

It's important to stress that had Math.imul been opposed by the JavaScript community, asm.js would of course not have used it - asm.js is a subset of JavaScript, so it can't use anything nonstandard. That said, new things are being discussed for standardization in JavaScript all the time, and if something could be useful for compiled C++ code, it is worth discussing in the JavaScript community and standards bodies.

8. Finally, asm.js and Google's Portable Native Client (PNaCl) share the goal of enabling C++ code to run safely on the web at near-native speeds, so it is perhaps not surprising that some people have written blog posts comparing them. Aside from that shared goal, there are other similarities, like both utilizing LLVM in some manner. But to compare them as if they were two competing products - as if one's success must come at the other's expense - misses the point, I think. PNaCl (and PPAPI, the Pepper plugin API on which it depends) is a proposed web technology, something that needs to be standardized with support from multiple vendors if it is to be part of the web; asm.js, on the other hand, is just a pattern of JavaScript, something already supported in all modern browsers. asm.js code is being optimized for right now, just like many patterns of JavaScript are, driven by the browser speed race. So asm.js will continue to get faster over time regardless of what happens with PNaCl: I don't see a realistic scenario in which PNaCl makes browser vendors stop competing on JavaScript speed (other plugin technologies like Flash, Java, Silverlight, and Unity did not).

All of this has nothing to do with how good PNaCl is. (It is worth mentioning that I consider it an impressive feat of engineering, and I have a huge amount of respect for the engineers working on it.) It simply operates in a different area. The JavaScript speed race matters a great deal in the browser market right now: when your browser runs something slower than another browser, you are naturally motivated to improve. This happens with benchmarks (we have already seen big speedups on asm.js code in two browsers), and it happens with things like the Epic Citadel demo (it initially did not run at all in Chrome, but Google quickly fixed that). This kind of thing will continue to happen on the web and drive asm.js performance, and as already mentioned, it will do so regardless of what happens with PNaCl.

Are things really that simple?

I've been saying that asm.js is just a pattern of JavaScript and nothing more, and that the speedups on it are part of the normal JavaScript speed competition that leads browser vendors to optimize various patterns. But things are actually more complicated than that. I would not argue that just any pattern should be optimized for, nor that it would be OK for a vendor to specifically optimize, based on hints, for an arbitrary subset of JavaScript.

To make my point, consider the following extreme hypothetical example: a browser vendor decides to optimize a little subset of JS that uses a JS array for memory and stores either integers or strings into it. For example:

function WTF(print) {
  'use WTF';
  var mem = [];
  function run(arg) {
    mem[20] = arg;    // arg may be an int or a string
    var a = 100;
    mem[5] = a;
    a = 'hello';      // the same local now holds a string
    mem[11] = a;
    mem[5] = mem[11];
    a = arg;
    return a;
  }
  return run;
}
Call this WTF.js. When 'use WTF' is present, the browser checks that there is a singleton array called mem and so forth, and can then optimize the WTF.js code very well: mem is in a closure and does not escape, so we know its identity statically; we know it should be implemented as a hash table, because its indexes likely have holes; we know it can contain only ints or strings, so some custom super-specialized data structure might be useful; and so on. Again, this example is meant to be extreme and ridiculous, but you can imagine such a pattern being useful as a compilation target for some odd language.

What is wrong with WTF.js is that while it is JavaScript, it is a terrible kind of JavaScript. It violates basic principles of how current JavaScript engines optimize: for starters, variables are given more than one type (both locals and elements of mem), which is exactly what JS engine devs have been telling us not to do for years. The subset also comes out of nowhere - I can't think of anything like it, and when I tried to give it a hypothetical justification in the previous paragraph, I had to really reach. And there are obvious ways to make it more reasonable, for example using consecutive indexes from 0 in the mem array so that there are no holes - something else JS engine devs have long recommended - and so forth. So WTF.js is, in fact, WTF-worthy.
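For contrast, the memory pattern that compiled code actually converged on keeps everything in one dense typed array, with a single element type per view - exactly the kind of access engines already optimize. A minimal sketch (the HEAP32 name follows Emscripten's convention, but this is illustrative, not its actual output):

```javascript
// One linear memory, viewed as dense 32-bit integers: no holes,
// no mixed types, and the array's identity is known statically.
var buffer = new ArrayBuffer(64 * 1024);
var HEAP32 = new Int32Array(buffer);

function store(ptr, value) {
  ptr = ptr | 0;
  value = value | 0;
  HEAP32[ptr >> 2] = value;  // byte address -> 32-bit word index
}

function load(ptr) {
  ptr = ptr | 0;
  return HEAP32[ptr >> 2] | 0;
}
```

Every load and store here has one fixed type, so the engine can compile them down to simple bounds-checked machine memory accesses - the opposite of the mem array in WTF.js.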

None of that is the case with asm.js. As I detailed before, asm.js is the latest development in a natural process that has been going on in the JavaScript world for several years: multiple compilers from C++ to JS appeared spontaneously, then converged on the subset of JS they target, and JS engines then optimized for that subset as it became more common on the web. While |0 might look odd to you, it wasn't invented for asm.js: it was "discovered" independently multiple times well before asm.js, found to be useful, and JS engines optimized for it. No one directed this process; it happened organically.
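To make the |0 idiom concrete, here is a sketch of the kind of integer arithmetic that pre-asm.js compilers like Emscripten and Mandreel already emitted: every value is coerced so each variable holds exactly one type, which is precisely what engines learned to optimize.

```javascript
// 32-bit integer addition in the Emscripten/Mandreel style:
// "| 0" coerces a value to a signed 32-bit integer, so the engine
// can keep x, y, and the result in integer registers throughout.
function add(x, y) {
  x = x | 0;            // coerce arguments on entry
  y = y | 0;
  return (x + y) | 0;   // truncate the result back to 32 bits
}
```

Note that add(0x7fffffff, 1) wraps around to -2147483648, giving the two's-complement overflow behavior compiled C code expects rather than a wider double result.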

These principles guided us while designing early versions of asm.js: there were code patterns that were even easier to optimize, but they were novel, sometimes bizarre, and most importantly, they ran poorly without special optimizations for them - optimizations that did not yet exist. When I felt a proposed pattern was of that nature, I strongly opposed it. Emscripten and Mandreel code already ran quite well; designing a new pattern that could be much faster with special optimizations, but is significantly slower today without them, would be a bad idea - it would feel unfair and WTF-like.

Therefore, as we designed asm.js, we tested on JS engines without any special optimizations for it - primarily on Chrome, and on Firefox with the work-in-progress optimizations turned off. asm.js is one output mode in Emscripten, so we had a good basis for comparison: when we flip the switch from the old Emscripten output pattern to asm.js, do things get faster or slower? (Note, by the way, that the WTF.js example from before lacks any such basis for comparison, which is another problem with it.) What we saw was that - having avoided the more novel ideas mentioned before, which were rejected - flipping the switch generally had little effect, sometimes helping and sometimes hurting, but overall performance stayed at much the same (quite good) level JS engines already delivered. That made sense, because asm.js is very close to the old pattern. (My guess is that the cases where it helped were ones where asm.js's extra-careful avoidance of types changing at runtime made a positive difference, and the cases where it hurt were ones that happened to hit an unlucky case in the JS engine's heuristics.)

A few months ago, Emscripten's default code generation mode switched to asm.js. Like most significant changes it was discussed on the mailing list and IRC, and I was happy to see that we did not get reports of slowdowns in other browsers as a result - further evidence that we got that part right. In fact (aside from the usual miscellaneous bugs that follow any significant change in a large project), the main regression was reports of startup slowdowns in Firefox, of all places! These were caused by AOT compilation sometimes being sluggish; following those bug reports, it was easy to get large speedups there.

Developing and shipping optimizations for something bizarre like WTF.js would be selfish, since it benefits one vendor while arbitrarily, unexpectedly, and needlessly harming performance in other vendors' browsers. None of that is the case with asm.js, which builds on a naturally-occurring pattern of JavaScript (the Emscripten/Mandreel pattern, already optimized for by JS engines), was designed not to harm performance in other browsers compared to that already-existing pattern, and, as benchmarks on two browsers have already shown, creates opportunities for significant speedups on the web.