
I Promise this was a bad idea

Promises are being added to everything: browser APIs, the latest cool frameworks and libraries, and soon, maybe even node core.

In theory, I'm all for it. In theory, if we had a clean, simple way to describe asynchronous interactions, it'd be a great idea. In reality, Promises fall short of the mark, and their relationship with async-await is cause for concern.

In theory…

The best parts of JavaScript have been the result of goat tracks: paths beaten by thorough usage. Promises started getting used in anger in libraries like jQuery, and for the cases they were being used for, they pretty much did everything developers needed. Want to load a file and then parse it? Easy. Want to load a file, then load a few files based on the first? No problem. Throw an error? Cool, just show an alert(). For trivial use cases, Promises look great.

But there are downsides that make promises not very pleasant to use.

Bother.

They catch thrown errors and handle them the same way as rejections. This is a massive pain if you want your server to crash when you've made a coding mistake, as you have to try to distinguish 'good' errors, like 404s from external services or DB lookup misses, from 'bad' errors like "TypeError: 555t5tt55555555 is not a function" – something that occurs frequently when you own cats. There is discussion around modifying how try-catch works in JavaScript to work around this issue, which should be a massive red flag in its own right.
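To make that concrete, here's a minimal sketch (fetchUser is a made-up API): a programmer error thrown inside a .then handler becomes a rejection, indistinguishable from an operational failure unless you inspect it yourself.

fetchUser(id)
    .then(function(user){
        // typo: user.nmae is undefined, so this throws a TypeError
        return user.nmae.toUpperCase();
    })
    .catch(function(error){
        // both a 404 from fetchUser and the TypeError above land here
        if(error instanceof TypeError){
            // rethrowing only rejects the outer chain; the process won't crash
            throw error;
        }
        // handle the 'good' error
    });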

Promises also don't interop nicely with existing APIs; you have to write boilerplate to bridge the normal 'error, result' callback pattern.
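For example, here's the kind of shim you end up writing (a sketch using Node's callback-style fs.readFile):

var fs = require('fs');

function readFilePromise(path){
    return new Promise(function(resolve, reject){
        // boilerplate to translate the (error, result) convention into resolve/reject
        fs.readFile(path, 'utf8', function(error, result){
            if(error){
                reject(error);
                return;
            }
            resolve(result);
        });
    });
}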

Promises run eagerly. By itself this isn't an issue; there are cases where eagerness is good, but often, lazily requesting asynchronous dependencies is preferable. Creating a lazy dependency graph also allows for easier debugging and description.
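A quick sketch of the difference (loadUser is a made-up async function): a promise's executor runs the moment the promise is constructed, whether or not anything ever consumes it, whereas a plain function only does the work when it is asked to.

// eager: the work starts as soon as this line runs,
// even if nothing ever calls .then()
var userPromise = new Promise(function(resolve, reject){
    loadUser(resolve, reject);
});

// lazy: nothing happens until a consumer actually asks
function getUser(callback){
    loadUser(function(user){
        callback(null, user);
    }, function(error){
        callback(error);
    });
}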

What then?

There is nothing special about Promises; they were just big first. There are many alternatives that do a better job. righto and pull-streams do everything Promises do, with fewer pitfalls. Righto interacts with err-backs and promises with no boilerplate, won't catch thrown errors, and can even print a trace that shows the dependency graph for a flow and exactly which task rejected.


But Mah Async-Await!

The biggest objection raised against not using Promises is that they are functionally required for async-await. But they aren't: there is nothing specific about Promises that makes them necessary for async-await to work; any async task wrapper could be used just as well. Async-await supporting Promises exclusively is a conflation of concerns.

If async-await were more generic, it would be possible to use it with a wide array of third-party libraries that expose the right API. pull-streams, righto, my-cool-lib could ALL be used with async-await.

GOAT TRACKS

This restriction is a major concern for me, as it lays down an iron path that restricts the formation of goat tracks. It enforces the usage of Promises, even though there may be better options out there. It restricts experimentation with alternatives.

I'm personally not a fan of the async-await pattern, as it attempts to hide asynchronicity behind synchronous-looking code, which hinders understanding. If it is going to be used widely, it should at least be given the opportunity to be used in a variety of ways, to see what works best.

I'm anxious about the day we all look back and think "Oh no, that was a huge mistake."

– Contains opinions.


How to do cross-domain async iFrame POSTs the easy/robust way

iFrames. Everyone loves them.

Haha! OK seriously now.

Every seasoned web developer has come up against the need to POST data asynchronously to a different domain that does not accept cross-domain headers, and the usual solution is an iFrame to post into.

The code is usually something like this:

// a form that will POST into the iFrame below; they are linked by the form's target matching the iFrame's name
var myForm = crel('form', {'class': 'braintreeForm', method: 'post', target: 'myDodgyIframe'});
var myFrame = crel('iframe', {name: 'myDodgyIframe'});
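Plus something like this to actually fire the POST (a sketch; the endpoint URL and field name are made up):

// point the form at the other domain and give it something to send
myForm.action = 'https://other-domain.example/endpoint';
myForm.appendChild(crel('input', {type: 'hidden', name: 'payload', value: 'some data'}));

// both need to be in the document; the form's target matches the iFrame's name,
// so the response loads into the frame instead of navigating the page
document.body.appendChild(myFrame);
document.body.appendChild(myForm);

// after wiring up the load handler below:
myForm.submit();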

And then some code to make this disgusting pile of crap work:

var firstLoad;

myFrame.addEventListener('load', function(event){
    // the load event that gets fired on insertion
    if(!firstLoad){
        firstLoad = true;
        return;
    }

    var result;

    // the actual load event, probably.
    try{
        result = JSON.parse(event.target.contentDocument.documentElement.textContent);
    }catch(jsonError){
        //handle the error somehow
    }

    // use result if it parsed well.
});

An issue that you may encounter is that, if you are inserting the iFrame into the DOM arbitrarily rather than only ever submitting into it, the load event triggers every time it is inserted, not just when a response comes back.

A solution to this came to me today:

myFrame.addEventListener('load', function(event){
    if(event.target.contentWindow.location.toString() === 'about:blank'){
        // ignorable load, return.
        return;
    }

    // useful load, use.
    var result;

    // the actual load event, probably.
    try{
        result = JSON.parse(event.target.contentDocument.documentElement.textContent);
    }catch(jsonError){
        //handle the error somehow
    }

    // use result if it parsed well.
    
    //cleanup
    event.target.contentWindow.location = 'about:blank';
});

This ignores any useless load events the frame might fire, without dodgy stateful ignore-the-first-load code.

Deploying a Web app in 14 days, No HTML.

Recently, at Sidelab, we developed and deployed signforms.com, an application for signing documents online, in 14 days. The application runs on a Node.js back-end, using Amazon Web Services. One interesting thing about the application is that it uses effectively no HTML.

There are a few static HTML files for the lander site, but once logged in, there is only one .html file. Other than the script tags that load the application for the very first time, the file looks like this:

<!doctype html>
<html>
<head>
    <!-- script tags to load the application (elided) -->
</head>
<body>
</body>
</html>

After that, no HTML is ever sent.

Don't believe me? Sign up (it's free currently), log in, then view source. You will see nothing but JSON.

A typical page in SignForms.com

WAT!?

The front end of SignForms is built using an open-source framework called Gaffa. Once the application has been loaded, all navigation is achieved via requests for JSON resources.

Oh, so it’s client-side templating.

No, there are no templates, ever. When we work on SignForms, we never write HTML, or HAML, or any kind of ML. We write JavaScript 'pages' that are made up of 'view items' such as containers, textboxes, buttons etc. For cacheability, we serialize these pages to JSON for the application to consume.

In your browser, Gaffa takes the serialized page, inflates it into objects, then generates the DOM required to display the page, and adds all other functionality such as event handlers, page load, and autosave behaviours that are required.

Why?

I'm not a fan of HTML; it's not something humans should write. One thing that people usually seem to forget is that HTML is a sibling of XML. The majority of people would prefer not to write XML, yet have no issue writing HTML, which seems strange to me. The other thing people seem to forget is that HTML is a serialization format for the DOM: it is verbose and inflexible.

One commonly seen flow of data for a web application is as follows:

Developer:

  • Write ‘Views’ in HTML
  • Add magic strings to tell the server where to put more strings

Server:

  • Scan massive string, stringify data and concatenate the extra strings to the original string
  • Send massive string across the wire
  • Don’t cache it because it will change next time.

Client:

  • Parse massive string
  • Correct any invalid HTML, take a best guess as to what it was supposed to be
  • Convert strings into DOM
  • Render the DOM

The way Gaffa handles navigation is different:

Developer:

  • Write code
  • File watcher auto-serializes to JSON

Server:

  • Serve JSON Page, cache for next time.
  • Serve DATA.

Client:

  • Parse JSON into ‘viewItems’.
  • Apply Data to ‘viewItems’.
  • Render DOM (or canvas, or WebGL, or Google Maps markers, or any other logical view).

There are a few big advantages:

  • Only one HTML page is ever sent, once, and it's tiny (2 KB uncompressed)
  • Developers write code, not an XML derivative.
  • Application pages are cached indefinitely, and are tiny (17 KB uncompressed and unminified for a feature rich page)
  • The server only serves data, reducing load.
  • Page-weight is microscopic (<0.5 MB per page for everything, including all images, CSS, and data)
  • Pages render stupid-fast (~1s for a complicated, data-bound page).

Gaffa achieves this by creating a declarative programming abstraction for the UI. If you want to display a textbox, and bind its value to the model, you would create a textbox viewItem:

var myTextbox = new textbox();

myTextbox.value.binding = '[myValue]';

Other views can bind to the same value in the model:

var myLabel = new label();
myLabel.text.binding = '[myValue]';

And the data can be manipulated before displaying it:

myLabel.text.binding = '(join " " "hello person, your value is" [myValue] "which is" [myValue].length "characters long")';

Then you add the viewItems to gaffa:

gaffa.views.add([myTextbox, myLabel]);

And you’re done.

Because viewItems are rendered as DOM, you can use CSS to style them as you normally would, and end up with a fast, feature rich web application in a very short period of time.

As is to be expected from my previous post, this application forgoes the usual "don't think, include" libraries such as jQuery, bootstrap, etc., and instead uses small components that do exactly the job required.

A few libraries/technologies used:

JS:

CSS:

The DOM isn’t slow, you are.

Edited post-HN: Wow, big response! WARNING: May contain traces of opinion, and naughty words like ‘shit’ and ‘internet explorer’.

This blog post was written in 2013, and many things have progressed. The predictions made have generally come true.

TL;DR:

  • Use jQuery for what it is meant for, not everything.
  • If you do stupid shit with the DOM, it will perform poorly.
  • If your web-app is slow, it’s because of your code, not the DOM.

Whenever the native versus web argument arises, the biggest issue people usually have with web is performance. They quote figures like '60fps' and 'scroll speed', and, back when browsers were just getting good, I would say "OK, there's work to be done, but they will get there", which was met with anything from "Yeah maybe" to "No, it will never happen!".

ORLY?

Really? You don't think it can be as fast? You think that your proficiency as a developer outstrips the devs working under the hood of Chrome? Firefox? Interne..wait, yeah, you're probably better than the IE devs… But for decent browsers, it is a ridiculous assumption that your average iOS or Android developer is going to hand-roll something faster than what a Chrome or Firefox developer provides to you via an API. Not to mention the fact that these abstractions are constantly being tweaked and reviewed by thousands of developers every day.

But really, this point is redundant anyway, because when you get down to it, even C is an abstraction over assembly, and if you are an Android developer, you're coding against Dalvik… which is just as much of an abstraction as JavaScript is anyway! Abstraction is more likely to increase speed, because someone smarter than you has written the bit that needs to be fast.

But again, this isn’t the topic of the post.

People often throw around the statement "The DOM is slow". This is a completely false statement. Utterly stupid. Look: http://jsfiddle.net/YNxEK/. Ten THOUSAND divs in about 200 milliseconds. For comparison, there are about 3000 elements on a Facebook wall, and you could, using JavaScript, render the whole thing in about 100 milliseconds. In a recent project, we needed a list of groups of items that would filter based on a user's input. There were about 140 items, and it had to be fast on a phone. Currently, the implementation is as follows:

On filter change:

  1. Delete everything.
  2. Render everything from scratch.

And it has absolutely no perceivable lag whatsoever on a mobile phone. None. This is pretty much the worst implementation possible, but it was all that was needed, because it was easily fast enough. Why, then, do people say that the DOM is slow?
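In code, it amounts to roughly this (a sketch with made-up item markup, not the project's actual implementation):

function renderList(container, items, filterText){
    // 1. Delete everything.
    container.innerHTML = '';

    // 2. Render everything from scratch.
    items.forEach(function(item){
        if(item.name.toLowerCase().indexOf(filterText.toLowerCase()) === -1){
            return;
        }
        container.appendChild(crel('div', {'class': 'item'}, item.name));
    });
}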

I blame jQuery.

These days, when hiring for a front-end developer position, there is a very simple way to determine how good a candidate is: "What are your thoughts on jQuery?" This simple question will cull about 80% of applicants off the bat. Some cull-able answers (which I have actually seen):

  • “I love JavaScript”. – Help them find the door, they will probably struggle on their own.
  • “It’s a really helpful framework” – A victim of group ignorance.
  • “It’s good but not as fast as HTML5” – Haha… get out.

I’m dead serious when I say these are all real answers from real people.

Yes, jQuery is the compatibility tool of choice these days, and for good reason; it solves the tedious problems it aims to solve in a really easy way. The problem is, most (and I do mean most) 'web developers' think it is the only way to work with the DOM. It isn't even that jQuery is slow; for what it does, it is quite fast. The issue is that a huge number of developers use it to do quite literally everything.

Actually, I blame developers…

This whole 'DOM is slow' thing is really just "I'm too stupid to know that what I'm doing is stupid", and practically no one is immune. I once worked for a large 'software' company that came to me one day with a laggy screen in a 'web app' and said "fix this". The bespoke JavaScript for this one screen was around 400 lines long and had nearly 500 jQuery selections in it. Worse, many of the selections were being made within a loop, for no apparent reason! I ended up being able to delete about 90% of the selectors just by assigning selections to variables, which resulted in a 1000% performance increase! Whenever you select an element (or elements) with jQuery, you're instantiating a very feature-rich wrapper.
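The fix was nothing cleverer than this kind of thing (a simplified sketch of the pattern; rows and renderRow are placeholders):

// before: re-running the selector on every iteration
for(var i = 0; i < rows.length; i++){
    $('.results-table').append(renderRow(rows[i]));
}

// after: select once, reuse the wrapped element
var resultsTable = $('.results-table');
for(var i = 0; i < rows.length; i++){
    resultsTable.append(renderRow(rows[i]));
}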

If all you are doing is making HTML elements, DO NOT USE JQUERY. About the worst possible way to create HTML is like this:

$('<div class="imAMoron"><span>derp</span></div>');

Not only will this be insanely slow in comparison to using the DOM method (document.createElement), it also leaves your code looking like a steaming pile of shit. If you need to make a lot of DOM in JavaScript, use a tool like laconic or crel. And I do recommend creating lots of DOM in JavaScript; it is definitely faster than requesting the same structure from a server, letting the browser parse the HTML into DOM, and then selecting the elements back out of the DOM to manipulate them.
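For comparison, the same element built with crel, or with nothing but the DOM (a rough sketch):

// with crel
var element = crel('div', {'class': 'imAMoron'},
    crel('span', 'derp')
);

// with plain DOM methods
var div = document.createElement('div');
var span = document.createElement('span');
span.textContent = 'derp';
div.className = 'imAMoron';
div.appendChild(span);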

What now?

First: ignore pretty much anything Facebook has to say about DOM performance. They really have no idea what they are talking about. Sencha was able to make a version of the Facebook mobile app with their framework, and it was FASTER than the official, native app, and Sencha isn’t even particularly fast.

Second: Stop using DOM libraries out of habit! If your target browser is IE8 or above, jQuery hardly provides any features that aren’t shipped on the host object anyway. document.querySelectorAll will pretty much do everything you need. jQuery is a tool, not a framework. Use it as such.
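For example, most day-to-day selection needs no wrapper at all:

// jQuery
var $items = $('.menu-item');

// plain DOM, works in IE8 and up
var items = document.querySelectorAll('.menu-item');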

Third: Use document fragments. If you are changing the state of the UI in a loop, don’t change it while that DOM is in the document, because every change you make will cause a re-draw or re-flow.
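A minimal sketch of the fragment approach:

var fragment = document.createDocumentFragment();

for(var i = 0; i < 10000; i++){
    var div = document.createElement('div');
    div.textContent = 'item ' + i;
    // appending to the fragment touches nothing in the live document
    fragment.appendChild(div);
}

// one insertion into the document, one reflow
document.body.appendChild(fragment);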

Fourth: USE THE GODDAMN PROFILER! In almost every case, the bottleneck will end up being your code, not the DOM.