Tuesday, May 14, 2013

The Elusive Universal Web Bytecode

It's often said that the web needs a bytecode. For example, the very first comment on a very recent article about video codecs on the web says:
A proper standardized bytecode for browsers would (most likely) allow developers a broader range of languages to choose from as well as hiding the source code from the browser/viewer (if that's good or not is subjective of course).
And other comments continue:
Just to throw a random idea out there: LLVM bytecode. That infrastructure already exists, and you get to use the ton of languages that already have a frontend for it (and more in the future, I'm sure).
[..]
I also despise javascript as a language and wish someone would hurry up replacing it with a bytecode so we can use decent languages again.
[..]
Put a proper bytecode engine in the browser instead, and those people that love javascript for some unknowable reason could still use it, and the rest of us that use serious languages could use them too.
[..]
Honestly, .Net/Mono would probably be the best bet. It's mature, there are tons of languages targeting it, and it runs pretty much everywhere already as fast as native code
Ignoring the nonproductive JS-hating comments, the point is basically that people want to use various languages on the web, and they want those languages to run fast. Bytecode VMs have been very popular since Java in the '90s, and they show that multiple languages can run in a single VM while maintaining good performance, so asking for a bytecode for the web seems to make sense at first glance.

But already in the quotes above we see the first problem: Some people want one bytecode, others want another, for various reasons. Some people just like the languages on one VM more than another. Some bytecode VMs are proprietary or patented or tightly controlled by a single corporation, and some people don't like some of those things. So we don't actually have a candidate for a single universal bytecode for the web. What we have is a hope for an ideal bytecode - and multiple potential candidates.

Perhaps, though, not all of the candidates are relevant? We need to pin down the criteria for what counts as a "web bytecode". The requirements mentioned by those asking for one include:
  • Support all the languages
  • Run code at high speed
To those we can add two additional requirements that are not mentioned in the above quotations, but are often heard:
  • Be a convenient compiler target
  • Have a compact format for transfer
In addition, we must add the requirements that anything running on the web must fulfill:
  • Be standardized
  • Be platform-independent
  • Be secure
JavaScript can already do the last three (it's already on the web, so it has to). Can it do the first four? I would say yes:
  • Support all the languages: A huge list of languages can compile into JavaScript, including major ones like C, C++, Java, C#, LLVM bytecode, and so forth. There are some rough edges - often porting an app requires changing some amount of code - but nothing that can't be improved with more work, if the relevant communities focus on it. Compilers from C++ to JavaScript like Emscripten and Mandreel have years of work put into them and are fairly mature (for example, see the Emscripten list of demos). GWT (for Java) has likewise been used in production for many years; the situation for C# is perhaps not quite as good, but improving, and even things like Qt can be compiled into JavaScript. For C#, Qt, etc., it really just depends on the relevant community being focused on the web as one of its targets: we know how to do this stuff, and we know it can work.
  • Run code at high speed: It turns out that C++ compiled to JavaScript can run at about half the speed of native code, which in some cases outperforms Java, and it is expected to get better still. Those numbers come from using the asm.js subset of JavaScript, which structures the compiler output into something that is easier for a JS engine to optimize (a hand-written sketch of what that output looks like appears right after this list). It's still JavaScript, so it runs everywhere and has full backwards compatibility, but it can be run at near-native speed already today.
  • Be a convenient compiler target: First of all, the long list of languages from before shows that many people have successfully targeted JavaScript. That's the best proof that JavaScript is a practical compiler target. Also, there are many languages that compile into either C or LLVM bytecode, and we have more than one compiler capable of compiling those to the web, and one of them is open source, so all those languages have an easy path. Finally, while compiling into a "high-level" language like JavaScript is quite convenient, there are downsides, in particular the lack of support for low-level control flow primitives like goto; however, this is addressed by reusable open source libraries like the Relooper.
  • Have a compact format for transfer: It seems intuitive that a high-level language like JavaScript cannot be compact - it's human-readable, after all. It turns out, though, that JS as a compiler target is already quite small, in fact comparable to native code when both are gzipped. Also, even in the largest and most challenging examples, like Unreal Engine 3, the time spent parsing JS into an AST need not be high. For example, in that demo it takes just 10 seconds on my machine to both parse and fully optimize the output of over 1 million lines of C++ (and remember that much of that optimization time would be needed no matter what format the code shipped in, because any format for the web has to be a portable one).
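
To make the asm.js point above concrete, here is a small, hand-written sketch of the style of code such compilers emit (real Emscripten output is machine-generated and far larger; the module, function and variable names here are made up for illustration):

    function MiniModule(stdlib, foreign, heap) {
      "use asm";
      // A single typed-array view over the module's linear memory.
      var HEAP32 = new stdlib.Int32Array(heap);

      function sum(ptr, n) {
        ptr = ptr | 0;        // "| 0" annotates the parameter as a 32-bit integer
        n = n | 0;
        var i = 0;
        var total = 0;
        while ((i | 0) < (n | 0)) {
          // Loads go through the heap view; ">> 2" turns a byte address into an Int32Array index.
          total = (total + (HEAP32[(ptr + (i << 2)) >> 2] | 0)) | 0;
          i = (i + 1) | 0;
        }
        return total | 0;     // the return type is annotated as well
      }

      return { sum: sum };
    }

Because every value is coerced to a known type, an engine (or the asm.js validator) can compile this much like statically typed code, yet it is still plain JavaScript and runs in any browser.
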
So arguably JavaScript is already very close to providing what a bytecode VM is supposed to offer, as listed in the seven requirements above. And of course this is not the first time that has been said; see here for a previous discussion from November 2010. In the 2.5 years since that post, the case for this approach has gotten significantly stronger: for example, JavaScript's performance on compiled code has improved substantially, and compilers to JavaScript can now handle very large C++ applications like Unreal Engine 3, both as mentioned before. At this point the main missing pieces are, first (as already mentioned), improving support for languages that are not yet fully mature on the web, and second, a few platform limitations that affect performance, notably the lack of SIMD and of threads with shared state.

Can JavaScript fill the gaps of SIMD and mutable-memory threads? Time will tell, and I think these things would take significant effort, but I believe it is clear that to standardize them would be orders of magnitude simpler and more realistic than to standardize a completely new bytecode. So a bytecode has no advantage there.

Some of the motivation for a new bytecode appears to come from an elegance standpoint: "JavaScript is hackish", "asm.js is a hack", and so forth, whereas a new from-scratch bytecode would (presumably) be a thing of perfection. That's an understandable sentiment, but technology is full of entrenched, less-than-elegant things: witness the persistence of x86, C++, and so forth (some would add imperative programming to that list). This is true not only of technology but of human civilization as well; for example, no natural language has the elegance of Esperanto, and our current legal and political systems are far from what a from-scratch redesign would arrive at. But large, long-standing institutions are easier to improve continuously than to replace outright. I think it's not surprising that that's true for the web as well.

(Note that I'm not saying we shouldn't try. We should. But at the same time we shouldn't stop trying to improve the current situation in a gradual way. My point is that the latter is more likely to succeed.)

Elegance aside, could a from-scratch VM be better than JavaScript? In some ways of course it could, like any redesign from scratch of anything. But I'm not sure that it could fundamentally be better in substantial ways. The main problem is that we just don't know how to create a perfect "one bytecode to rule them all" that is
  • Fast - runs all languages at their maximal speed
  • Portable - runs on all CPUs and OSes
  • Safe - sandboxable so it cannot be used to get control of users' machines
The elusive perfect universal bytecode would need to do all three, but it seems to me that we can only pick two.

Why is this so, when supposedly the CLR and JVM show that the trifecta is possible? The fact is that they do not, if you really take "fast" to mean what I wrote above - "runs all languages at their maximal speed" - which is what I mean by "perfect" in the context of the last paragraph. For example, you can run JavaScript on the JVM, but it won't come close to the speed of a modern JS VM. (There are examples of promising work like SPUR, but that was done before the leaps in JS performance that came with Crankshaft, TypeInference, IonMonkey, the DFG, etc.)

The basic problem is that to run a dynamic language at full speed, you need to do the things that JavaScript engines, LuaJIT, etc. do, which include self-modifying code (architecture-specific PICs) and even entire interpreters written in hand-optimized assembly. Making those things portable and safe is quite hard: when you make them portable and safe, you make them more generic, pretty much by definition. But CPUs differ enough that doing things generically can lead to slower performance.
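
As a rough illustration of the idea behind PICs (though not of the self-modifying code itself), here is a toy sketch in plain JavaScript; real engines generate and patch machine code in place, which is exactly the architecture-specific part, and the helper below is purely hypothetical:

    // Returns a property accessor that remembers the last kind of object it saw
    // and reuses a specialized stub for it, re-specializing on a miss.
    function makeCachedGetter(propertyName) {
      var cachedShape = null;   // stand-in for the engine's hidden class / shape
      var cachedStub = null;    // stand-in for a patched machine-code stub
      return function (obj) {
        var shape = Object.getPrototypeOf(obj); // crude approximation of a shape check
        if (shape !== cachedShape) {
          cachedShape = shape;  // cache miss: "re-specialize"
          cachedStub = function (o) { return o[propertyName]; };
        }
        return cachedStub(obj); // cache hit: fast path
      };
    }

    var getX = makeCachedGetter("x");
    getX({ x: 1 });             // first call misses; later calls on similar objects hit
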

The problems don't stop there. A single "bytecode to rule them all" must make some decisions about its basic types. LuaJIT and several JavaScript VMs represent values using a form of NaNboxing, which uses the unused bit patterns of NaN doubles to store other types of values. Deciding whether to NaNbox (and in what way) is typically a design decision for an entire VM. NaNboxing may be all well and good for JS and Lua, but it might slow down other languages. Another example is how strings are implemented: IronPython (Python on .NET) ran into issues because Python expects strings to behave differently than .NET strings do.
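
For the curious, here is a small sketch of the NaNboxing trick, written in plain JavaScript with typed arrays just to show the bit-level idea (real VMs do this on raw 64-bit words in C or C++, and the tag value below is made up):

    var buf = new ArrayBuffer(8);
    var asDouble = new Float64Array(buf);
    var asWords = new Uint32Array(buf);  // [low word, high word] on little-endian machines

    // A NaN has all exponent bits set and a nonzero mantissa, leaving ~51 spare bits.
    // A VM can hide a 32-bit integer (or a pointer) in those spare bits.
    var INT32_TAG = 0xFFF10000;          // hypothetical tag: all-ones exponent + nonzero payload

    function boxInt32(i) { asWords[0] = i >>> 0; asWords[1] = INT32_TAG; }
    function isBoxedInt32() { return asWords[1] === INT32_TAG; }
    function unboxInt32() { return asWords[0] | 0; }

    asDouble[0] = 3.14;                  // an ordinary double is stored as-is
    boxInt32(42);                        // now the same 64 bits hold a tagged integer
    console.log(isNaN(asDouble[0]), isBoxedInt32(), unboxInt32()); // true true 42

A language whose values don't fit this layout (64-bit integers, say) gets little benefit from the trick, which is exactly the kind of design decision discussed above.
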

Yet another area where decisions must be made is garbage collection. Different languages have different patterns of memory usage, determined both by the language itself and by the culture around it. For example, the new garbage collector planned for LuaJIT 3.0, a complete redesign from scratch, is not going to be a copying GC, while other VMs do use copying GCs. Another concern is finalization: some languages allow hooking into object destruction, either before or after the object is collected, while others disallow such things entirely, and a design decision on that matter has implications for performance. So it is doubtful that a single GC could be truly optimal for all languages, in the sense of being "perfect" and letting everything run at maximal speed.

So any VM must make decisions and tradeoffs about fundamental features. There is no obvious optimal solution that is right for everything. If there were, all VMs would look the same, but they very much do not. Even relatively similar VMs like the JVM and CLR (which are similar for obvious historic reasons) have fundamental differences.

Perhaps a single VM could include all the possible basic types - both "normal" doubles and ints and NaNboxed doubles? Both Pascal-style strings and C-style strings? Both asynchronous and synchronous APIs for everything? Of course all of this is possible, but it makes the VM much more complicated. If you really want to squeeze every last ounce of performance out of your VM, you should keep it simple - that's what LuaJIT does, and very well. Trying to support all the things leads to compromises, which goes against the goal of a VM that "runs all languages at their maximal speed".

(Of course there is one way to support all the things at maximal speed: use a native platform as your VM. x86 can run Java, LuaJIT and JS all at maximal speed almost by definition. It can even be sandboxed in various ways. But it has given up the property of being platform-independent.)

Could we perhaps just add another VM like the CLR alongside JavaScript, and get the best of both worlds that way, instead of putting everything we need in one VM? That sounds like an interesting idea at first, but it has technical difficulties and downsides, is complex, and would likely regress existing performance.

Do we actually need "maximal speed"? How about just "reasonable speed"? We certainly can't hold out for some perfect VM that can do it all. In the last few paragraphs I've been talking about a "perfect" bytecode VM that can run everything at maximal speed, and my point is that no such VM exists. But with some compromise we definitely can have a VM that runs many things at very high speed. Examples of such VMs are the JVM, the CLR, and, as mentioned before, JavaScript VMs as well, since they run one very popular dynamic language at maximal speed, and they can run statically typed code compiled from C++ about as well as, or even better than, some bytecode VMs (with the already-discussed caveats of SIMD and shared-mutable threads).

For that reason, switching from JavaScript to another VM would not be a strictly better solution in all respects; it would just shift us to a different compromise. For example, JavaScript itself would be slower on the CLR but C# would be faster, and while I'm not sure which of the two can run C++ faster, my bet is that both can run it at about the same speed.

So I don't think there is much to gain, technically speaking, from considering a new bytecode for the web. The only clear advantage such an approach could offer is perhaps a more elegant solution, since we would be starting from scratch and designing with less baggage. That's an appealing idea, and in general elegance often leads to better results, but as argued earlier there would likely be no significant technical advantage to elegance in this particular case - it would be elegance for elegance's sake.

I purposefully said we don't need a new bytecode in the last paragraph. We already have JavaScript, which I have claimed is quite close to providing all the advantages that a bytecode VM could. Note that this wasn't entirely expected - not every language can be transformed in a straightforward way into a more general target for other languages. It just so happens that JavaScript did have just enough low-level support (bitwise operations being 32-bit, for example) to make it a practical compiler target for C, C++ and LLVM IR, which made it worth investing in projects like the Relooper that work around some of its other limitations. Combined with the already ongoing speed competition among JavaScript engines, the result is that we now have JavaScript VMs that can run multiple languages at high speed.
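
To illustrate the kind of low-level support being referred to: JavaScript's bitwise operators truncate their operands to 32-bit two's-complement integers, which is exactly the behavior that C-style integer arithmetic compiled to JS relies on. A few quick examples:

    var x = 0x7fffffff;           // INT_MAX for a signed 32-bit int
    console.log((x + 1) | 0);     // -2147483648: signed 32-bit wraparound
    console.log(0xffffffff | 0);  // -1: the same bit pattern reinterpreted as signed
    console.log((-5) >>> 0);      // 4294967291: and ">>> 0" gives the unsigned view
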

In summary, we already have what practically amounts to a bytecode VM in our browsers. The work is not complete, though: while we can port many languages very well right now, support for others is not quite there yet. If you like a language that is not yet supported on the web, and you want it to run on the web, please contribute to the relevant open source project working on that (or start one if there isn't one already). There is no silver bullet here - no other bytecode VM that, if only we adopted it, would give us all the languages and libraries we want on the web "for free" - there is work that needs to be done. But in recent years we have made huge amounts of progress in this area, both in infrastructure for compiling code to JavaScript and in improvements to JavaScript VMs themselves. Let's work together to finish that work for all languages.

28 comments:

  1. The blogpost is long enough as it is ;) But what specifically do you think should be addressed regarding PNaCl?

  2. The problem I have with JS and asm.js in particular is that the arguments for making it JS could be made for any of the alternatives. But JS has particular disadvantages that are being ignored just because.

    - Asm.js is not intended for humans to read or write. So the fact that it is plain JavaScript offers no real benefit other than an illusion of human-readability. At the very least, you need a validator to ensure the rules are being followed correctly, lest you be kicked to classic JS mode.

    - Given the enormous speed difference between normal JS and asm.js, and the kinds of things asm.js is being used for, I cannot imagine anyone wanting to run asm.js code in an unsupported browser: it would simply be too slow to be useful. It's like trying to play a modern game on a 5 year old graphics card... possible in theory, unplayable in practice.

    - Javascript numbers and arithmetic are some of the worst parts of the language, and asm.js does not improve upon them, quite the opposite. The intish/doublish type hierarchy is convoluted, and artificially limits you to 32-bit ints and 64-bit doubles.

    - Proposals for incorporating more types and e.g. SIMD instructions naturally cannot do so at a language level, and must instead use convoluted wrapper objects. Which means there are now real native types and faux-native types, and some types 'are more equal than others'.

    I can only conclude that asm.js is a massive fallacy, of virtue-by-association, which picks the compromise that nobody really wanted, but only seems good because it is currently the least offensive.

    I'm reminded of all the arguments of why XML was a great idea, and how none of those things ever materialized: XML was "human-friendly" but rarely written by hand, and the tools built upon it, such as XSLT, gained nothing from being written in XML, quite the opposite.

    Instead, JSON took over, by virtue of including only the parts people actually wanted, and mapping cleanly to the types and structures of numerous languages rather than just the one thing we had before (SGML/HTML).

    LLVM to me seems like the JSON of JITs. It would be a real intermediary format, not a hacked one, it's had many more years of research behind it than asm.js, and it's not reinventing the wheel just for the web's sake.

    Unfortunately it seems asm.js-mania has already struck, and just like JSON, we'll probably have to wait 10 years before everyone finally admits they dove in head-first without really considering the alternatives seriously.

    1. Don't dismiss the usefulness of asm.js being a subset of JS. For one thing, it means there is less work to specify and test asm.js than there would be for other bytecode formats. For another thing, asm.js code that runs well on a phone in an asm.js-supporting browser will probably also run well on a fast desktop in a browser that doesn't support asm.js. There are also use-cases involving porting of legacy code where raw performance is not a big issue.

      Proposals for incorporating more types are mostly focused on BinaryData and other new features in ES6, so these are real language features, not "fake".

      Furthermore, you haven't listed any real benefits for LLVM. "Not a hacked one" isn't a tangible benefit, and it's also not true, since LLVM bitcode was not designed to be portable and actually isn't. Which means anything LLVM-based, like PNaCl, has more work to do, apparently more work than asm.js based on comparing our asm.js efforts with Google's PNaCl effort.

    2. > The problem I have with JS and asm.js in particular is that the arguments for making it JS could be made for any of the alternatives.

      As I said in the article, yes, JS as a multilanguage VM is comparable to other multilanguage VMs (JVM, CLR). The main benefit it has is that it is already standardized and present in all web browsers. That is the one specific argument that cannot be made for the alternatives.

      > Given the enormous speed difference between normal JS and asm.js, and the kinds of things asm.js is being used for, I cannot imagine anyone wanting to run asm.js code in an unsupported browser: it would simply be too slow to be useful

      This is simply not true. Look at

      http://arewefastyet.com/#machine=11&view=breakdown&suite=asmjs-ubench

      where you can see v8 doing very well in many cases despite not having special asm.js optimizations. As another example, try running Epic Citadel in Chrome (requires a special build currently due to a memory bug and a network bug) - despite not having special asm.js optimizations, it runs quite well.

      > LLVM to me seems like the JSON of JITs. It would be an real intermediary format, not a hacked one, it's had many more years of research behind it than asm.js, and it's not reinventing the wheel just for the web's sake.

      LLVM is not portable. You can see asm.js as a portable variant of LLVM IR, in fact emscripten compiles LLVM into asm.js - so there is a clear equivalence between the two.

    3. We can expect the speed gap between desktop and mobile to decrease, so this isn't really a relevant argument for the future of the web. Neither is it an attractive argument that asm.js is excellent for doing something today in a browser that native excelled at 10+ years ago. And whether asm.js's BinaryData is derived from ES6 or not doesn't change them being square pegs for round holes, which map poorly to other languages and are cumbersome to work with in ES6 itself.

      I say this as someone who enjoys JavaScript quite a lot by the way, and knows where highly-optimizable languages like asm.js can go.

      The argument that asm.js is here today and PNaCl isn't strikes me as a classic open source "talk is silver, code is gold" argument and in line with what I've come to expect of Mozilla over the past 15 years. I'm not trying to troll, I've just seen this in open source communities over and over again: there is no room for truly big projects, so instead the only thing that gets done is that which can be done in incremental steps.

      LLVM is a technology with proven potential. In fact, it's the LLVM-driven emscripten that makes asm.js viable in the first place, is it not? If that's not a strong sign that it's the LLVM technology and not the asm.js subset where the magic is, then I don't know what is. I admit I haven't worked with LLVM much directly, but it strikes me as exactly what asm.js pretends to be: a high-level assembly-like language.

      Having LLVM infrastructure in the browser also has interesting implications for WebGL and GLSL. Indeed as far as I understand, Apple used LLVM as the JIT in CoreGraphics for on-demand hardware acceleration, which worked so well nobody really noticed. That's where the web is going if you really look forward, instead of trying to make demos from 1999 run well...

    4. "so there is a clear equivalence between the two."

      There is an obvious equivalence between any two Turing complete languages. That doesn't mean that that equivalence is elegant. Look at what the demoscene is doing today, rather than in 2006, and ask yourself if asm.js will get us closer to having that run in a browser any time soon...

    5. > LLVM is a technology with proven potential. In fact, it's the LLVM-driven emscripten that makes asm.js viable in the first place, is it not? If that's not a strong sign that it's the LLVM technology and not the asm.js subset where the magic is, then I don't know what is.

      Who said otherwise? Of course the "magic" is LLVM + the JS VM's backend optimizers (IonMonkey in Firefox, CrankShaft in Chrome, etc.).

      Emscripten compiles LLVM into asm.js and optimizes it, and asm.js is just a subset that is easy to optimize. Most of the work is done by LLVM and the JS VMs.

      Should we directly put LLVM in the browser as opposed to first compiling it to something portable like asm.js? It would be elegant, but also nonportable. (See PNaCl for an effort to make it portable.)

      > There is an obvious equivalence between any two Turing complete languages. That doesn't mean that that equivalence is elegant.

      I agree, and made a point in the article to talk about how a solution-from-scratch could be more elegant. But the question is if that elegance translates into benefits aside from aesthetics. I argued it does not, in this very specific case.

    6. I gave you two IMO important ones: types other than int32/double and SIMD. How well does asm.js auto-vectorize after being baked into JS form for example?

    7. As I mentioned in the article, SIMD will be challenging to do in JS. There is no simple solution.

      As for types other than int32 and double, the issue is with int64 and float32. This also has no simple solution, it will require new standardization work to fully optimize. I suspect SIMD is more important though based on the numbers I've seen so far (so I focused on that in the article and did not mention float32s and int64s), but it would depend on the workload of course.

    8. "no room for truly big projects" seriously? Have you seen what we're doing with Rust?

    9. "LLVM is a technology with proven potential. In fact, it's the LLVM-driven emscripten that makes asm.js viable in the first place, is it not? If that's not a strong sign that it's the LLVM technology and not the asm.js subset where the magic is, then I don't know what is. I admit I haven't worked with LLVM much directly, but it strikes me as exactly what asm.js pretends to be: a high-level assembly-like language."

      Speaking as someone who *does* work with LLVM on a daily basis, I think it would be unfortunate if it became part of the Web platform. There is a very good argument here:

      http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/043719.html

      I'll summarize:

      * LLVM was never designed to be portable. It has lots of unportable stuff in it (TargetData optimizations, target-specific calling conventions such as struct passing by-value/by-pointer encoded in the IR). Trying to make it portable is an inelegant hack, just like asm.js.

      * LLVM is slow for just-in-time compilation of huge codebases. Very slow. It simply wasn't designed for the workloads and user demands of the Web relating to compilation speed. JavaScript engines are much faster at compiling, because compilation time is so important on the Web.

      * Formalization and specification of LLVM IR would be very difficult. It's not defined formally.

      * Undefined behavior in LLVM IR is completely unspecified. We know what happens when undefined behavior creeps into the Web stack: content starts to rely on the quirks of a particular implementation, and nobody can upgrade without breaking content.

      * The LLVM instruction set, bitcode format, semantics, intrinsics, etc. are not stable and change all the time. This is because it is a compiler IR.

    10. "the tools built upon it, such as XSLT, gained nothing from being written in XML, quite the opposite"

      I don't think you know how XSLT is used. It's quite common to use XSLT to generate XSLT (to generate XSLT) in software such as Apache Cocoon. This might sound convoluted but it's really just code generation which is a useful technique that can reduce the complexity of code -- it often allows programmers to maintain a separation of concerns by dynamically generating code that is tailored in some way. If XSLT was in a different language then code generation would have been significantly more complex.

  3. I would settle for a "binary javascript" similar to binary xml (http://en.wikipedia.org/wiki/Binary_XML). It could simply map the javascript to a compact binary format.

    Using a prefix format similar to Scheme, or a postfix format similar to Forth, would eliminate most of the parsing work while keeping the format compact. Throw in a standard hash for the names of functions, variables, etc., and you could make it even more compact.

    A binary representation could go a long way to being a "bytecode" without enforcing certain machine models.

  4. The reason I mentioned pNaCl is that it aims to provide fast, portable, and safe execution for static languages (the same target domain as emscripten+asm.js). It seems odd to have written such a long post without mentioning a project that specifically aims to solve exactly the problem described.

    Speaking of Rust gives me an idea: put short-term hacks in Firefox and long-term solutions in Servo...

    1. True, I did mention the JVM and CLR but I could have also mentioned PNaCl, Flash, etc., since those also aim to run multiple static languages in a portable way.

      Definitely PNaCl (and Flash) is interesting, and aims to solve a very similar problem. In terms of performance, I don't know where PNaCl currently stands (but I would be very curious to see numbers), and in terms of standardization, it has not even been specced as far as I know. So I am not sure what to say about it, except that in general it is a very cool approach and I am impressed by the technical achievements of that team.

    2. I suggest you watch David's I/O presentation on Thursday at 5:20. It should be live streamed.

    3. How is pNaCl any different to the CLR or JVM? It appears to be, from a cynical point of view, a Google version of those two in an attempt for Google control.

      Is it just a power play? I doubt any of the other players could adopt it, the same way Google couldn't really adopt Flash or Silverlight (except in limited circumstances).

  5. these questions strike me as essential:

    1. what do developers WANT to do?

    2. what CAN they do?

    (1) being a strategic issue, (2) being a tactical issue. mozilla seems to be projecting (2) into (1), i believe unwisely. consider the rate of churn on the web. have any of you maintained a site whose code remained stable over a four year period? in my experience, either the requirements and/or market changes, or simple bitrot sets in. so why the strategic bet on tools used to build mid-term code bases? well maybe es6, but it has taken far too long to arrive. c++, the world's most complex language, will likely complete two major standards revs before es6 arrives. meanwhile new tools like go, rust and even dart seem to be meeting the true demand of (1) - they are allowing developers to do what they WANT, which is build better things with better tools. no one seems to be griping about leaving perfectly adequate tools behind, they want a better future.

    i suppose my point being that fast js is still js, which is still a weak tool, and developers seem keen on having something better, and that continuity on the web is a non-issue, the web rebuilds itself every four or five years anyway.

  6. what about this candidate from 2006
    http://www-archive.mozilla.org/projects/tamarin/

    that Mozilla never bothered to look into?

  7. Sticking to JavaScript is just for political reasons. The web standards process doesn't even give a new bytecode a chance to establish itself. The standards people will kill every new challenger with the cry "non-standards are evil!" or by writing a long, long post like this one. That's the point that angers me. Mozilla should know that it is a responsible player in this.

    Actually I really hope Firefox loses its market share (looks like it's ongoing: http://gs.statcounter.com/) and loses its power.

    And I will never use asm.js. Simply because it's too slow on browsers that don't support asm.js. Epic Citadel at 20 fps on the latest Core i7-3770K is a joke. Slower than Flash Player!

  8. Thank you for this excellent and very informative post! Your arguments made a very convincing case for asm.js (or some similar solution) as the browser bytecode we're looking for.

    The last three comments made me sad and made me wonder if the commenters even took the time to read your post and try to understand your arguments... JavaScript is definitely not perfect, but is still evolving and asm.js is a very interesting answer to the "byte code" need.

  9. JavaScript is the bytecode. The new trend in web development is to write in an altJS language, such as ClojureScript, Opal, or Fay, that compiles to JavaScript. The result is efficient, and as obfuscatable/deobfuscatable as any bytecode ever was.

  10. > It turns out that C++ compiled to JavaScript can run at about half the speed of native code

    How can this be considered remotely good enough? I'd be annoyed if apps were draining my battery 25% faster, never mind 200%.

    1. No one considers that good enough; that's why JS engine devs are working hard to push things further.

      You can track progress here:

      http://arewefastyet.com/#machine=12&view=breakdown&suite=asmjs-ubench

      http://arewefastyet.com/#machine=12&view=breakdown&suite=asmjs-apps

  11. We already have the JVM that can take any language you throw at it.
