Imagine we have a file code.c with contents
#include <stdio.h>

int double_it(int x) {
  return x + x;
}

int main() {
  printf("hello, world!\n");
}

Compiling it with emcc code.c, we can run it using node a.out.js, and we get the expected output of hello, world! So far so good; now let's look inside the code.
The first thing you might notice is the size of the file: it's pretty big! Looking inside, the reasons become obvious:
- It contains comments. Those would be stripped out in an optimized build.
- It contains runtime support code, for example it manages function pointers between C and JS, can convert between JS and C strings, provides utilities like ccall to call from JS to C, etc. An optimized build can reduce those, especially if the closure compiler is used (--closure 1): when enabled, it will remove code not actually called, so if you didn't call some runtime support function, it'll be stripped.
- It contains large parts of libc! Unlike a "normal" environment, our compiler's output can't just expect to be linked against libc as it loads. We have to provide everything we need that is not an existing web API. That means we need to provide a basic filesystem, printf/scanf, etc. ourselves (a small sketch of this follows below). That accounts for most of the size, in fact. Closure compiler helps with the part of this that is written in normal JS; the part that is compiled from C gets stripped by LLVM and is minified by our asm.js minifier in optimized builds.
(Side note: We could probably optimize this quite a bit more. It's been lower priority I guess because the big users of Emscripten have been things like game engines, where both the code and especially the art assets are far larger anyhow.)
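As a concrete (if contrived) sketch - the file name hello.txt is just for illustration - even this tiny bit of stdio usage only works because the generated JS bundles its own C library and an in-memory filesystem:

#include <stdio.h>

int main() {
  /* There is no native filesystem to talk to here; Emscripten's bundled
     runtime provides an in-memory one, plus the stdio code on top of it,
     and all of that ends up in the output JS. */
  FILE *f = fopen("hello.txt", "w");
  if (f) {
    fprintf(f, "hello, file!\n");
    fclose(f);
  }
  return 0;
}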
Ok, getting back to the naive unoptimized build - let's look for our code, the functions double_it() and main(). Searching for main leads us to
function _main() {
  var $vararg_buffer = 0, label = 0, sp = 0;
  sp = STACKTOP;
  STACKTOP = STACKTOP + 16|0;
  $vararg_buffer = sp;
  (_printf((8|0),($vararg_buffer|0))|0);
  STACKTOP = sp;
  return 0;
}

This seems like quite a lot for just printing hello world! It's because this is unoptimized code. So let's look at an optimized build. We need to be careful, though - the optimizer will minify the code to compress it, and that makes it unreadable. So let's build with -O2 --profiling, which optimizes in all the ways that do not interfere with inspecting the code (to profile JS it is very helpful to be able to read it, hence that option keeps the output readable but otherwise optimized; see emcc --help for the -g1, -g2, etc. options, which do related things at different levels). Looking at that code, we see

function _main() {
  var i1 = 0;
  i1 = STACKTOP;
  _puts(8) | 0;
  STACKTOP = i1;
  return 0;
}

There is some stack handling overhead, but now it's clear that all it's doing is calling puts(). Wait, why is it calling puts() and not printf() like we asked? The LLVM optimizer does that, as puts() is faster than printf() on the input we provide (there are no variadic arguments here and the format string is a constant ending in a newline, so puts is sufficient).
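As a rough sketch of when that libcall rewrite can kick in (the exact conditions depend on the LLVM version), compare these two calls:

#include <stdio.h>

int main() {
  /* Constant string, no format specifiers, trailing newline: LLVM may
     rewrite this printf into puts("hello, world!"). */
  printf("hello, world!\n");
  /* This one has a variadic argument, so it stays a printf call. */
  printf("the answer is %d\n", 42);
  return 0;
}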
Keeping Code Alive
What about the second function, double_it()? There seems to be no sign of it. The reason is that LLVM's dead code elimination got rid of it - it isn't being used by main(), which LLVM assumes is the only entry point to the entire program! Getting rid of unused code is very useful in general, but here we actually want to look at code that is dead. We can disable dead code elimination by building with -s LINKABLE=1 (a "linkable" program is one we might link with something else, so we assume we can't remove functions even if they aren't currently being used). We can then find
function _double_it(i1) {
  i1 = i1 | 0;
  return i1 << 1 | 0;
}

(Note btw the "_" that prefixes all compiled functions. This is a convention in Emscripten output.) Ok, this is our double_it() function from before, in asm.js notation: we coerce the input to an integer (using |0), then shift it left by one bit (LLVM turned x+x into a multiply by two, done as a shift) and return it.
We could also keep code alive by calling it - but if we called it from main(), it might get inlined, so disabling dead code elimination is simplest here. You can also do this in the C/C++ code, using the C macro EMSCRIPTEN_KEEPALIVE on the function (so, something like int EMSCRIPTEN_KEEPALIVE double_it(int x) { ).
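For example, a minimal sketch of the EMSCRIPTEN_KEEPALIVE route (the macro comes from emscripten.h) might look like:

#include <stdio.h>
#include <emscripten.h>

/* EMSCRIPTEN_KEEPALIVE marks the function as used, so dead code
   elimination keeps it even though main() never calls it. */
int EMSCRIPTEN_KEEPALIVE double_it(int x) {
  return x + x;
}

int main() {
  printf("hello, world!\n");
}

The kept function then shows up in the output as _double_it, much like in the -s LINKABLE=1 build above.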
C++ Name Mangling
Note btw that if our file had the suffix cpp instead of c, things would have been less fun. In C++ files, names are mangled, which would cause us to see
function __Z9double_iti(i1) {
  // ..
}

You can still search for the function name and find it, but name mangling adds some prefixes and suffixes.
asm.js Stuff
Once we can find our code, it's easy to keep poking around. For example, main() calls puts() - how is that implemented? Searching for _puts (again, remember the prefix _) shows that it is accessed from
var asm = (function(global, env, buffer) {
  'use asm';
  // ..
  var _puts = env._puts;
  // ..
  // ..main(), which uses _puts..
  // ..
})(.., { .. "_puts": _puts .. }, buffer);

All asm.js code is enclosed in a function (this makes it easier to optimize - it does not depend on variables from outside scopes, which could change). puts(), it turns out, is written not in asm.js but in normal JS, and we pass it into the asm.js block so it is accessible - by simply storing it in a local variable, also called _puts. Looking further up in the code, we can find where puts() is implemented in normal JS. As background, Emscripten allows you to implement C library APIs either in C code (which is compiled) or in normal JS code, which is processed a little and then just included in the output. The latter are called "JS libraries", and puts() is an example of one.
Conclusion
You don't need to read the code that is output by any of the compilers you use, including Emscripten - compilers emit code meant to be executed, not understood. But still, sometimes it can be interesting to read it. And it's easier to do with a compiler that emits JavaScript, because even if it isn't typical hand-written JavaScript, it is still in a fairly human-readable format.