Edited post-HN: Wow, big response! WARNING: May contain traces of opinion, and naughty words like ‘shit’ and ‘internet explorer’.
This blog post was written in 2013, and many things have progressed. The predictions made have generally come true.
TL;DR:
- Use jQuery for what it is meant for, not everything.
- If you do stupid shit with the DOM, it will perform poorly.
- If your web-app is slow, it’s because of your code, not the DOM.
Whenever the native versus web argument arises, the biggest issue people usually have with web is performance. They quote figures like ‘60fps’ and ‘scroll speed’, and, back when browsers were just getting good, I would say “OK, there’s work to be done, but they will get there”, which was met with anything from “Yeah, maybe” to “No, it will never happen!”.
ORLY?
Really? You don’t think it can be as fast? You think that your proficiency as a developer outstrips the devs working under the hood of Chrome? Firefox? Interne… wait, yeah, you’re probably better than IE devs… But for decent browsers, it is a ridiculous assumption that your average iOS or Android developer is going to out-implement a function that a Chrome or Firefox developer already provides to you via an API. Not to mention the fact that these abstractions are constantly being tweaked and reviewed by thousands of developers every day.
But really, this point is redundant anyway, because when you get down to it, even C is an abstraction on assembly, and if you are an Android developer, you’re coding against Dalvik… which is just as much of an abstraction as JavaScript is anyway! Abstraction is more likely to increase speed, because someone smarter than you has written the bit that needs to be fast.
But again, this isn’t the topic of the post.
People often throw around the statement “The DOM is slow”. This is a completely false statement. Utterly stupid. Look: http://jsfiddle.net/YNxEK/. Ten THOUSAND divs in about 200 milliseconds. For comparison, there are about 3000 elements on a Facebook wall, and you could, using JavaScript, render the whole thing in about 100 milliseconds. In a recent project, we needed a list of groups of items that would filter based on a user’s input. There were about 140 items, and it had to be fast on a phone. Currently, the implementation is as follows:
On filter change:
- Delete everything.
- Render everything from scratch.
And it has absolutely no perceivable lag whatsoever on a mobile phone. None. This is pretty much the worst implementation possible, but it was all that was needed, because it was easily fast enough. Why then, do people say that the DOM is slow?
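That worst-case approach is roughly this (a sketch; the function name and the item shape are invented for illustration):

```javascript
// Naive filtering: wipe the container, then rebuild every matching row.
// Brutally simple, and still fast enough for ~140 items on a phone.
function renderList(container, items, filterText) {
  container.innerHTML = ''; // delete everything
  items
    .filter(function (item) {
      return item.name.toLowerCase().indexOf(filterText.toLowerCase()) !== -1;
    })
    .forEach(function (item) {
      var row = document.createElement('div');
      row.textContent = item.name;
      container.appendChild(row); // render everything from scratch
    });
}
```

Wire that to the input’s keyup event and you’re done.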
I blame jQuery.
These days, when hiring for a front-end developer position, there is a very simple way to determine how good a candidate is: “What are your thoughts on jQuery?”. This simple question will cull about 80% of applicants right off the bat. Some cull-able answers (which I have actually seen):
- “I love JavaScript”. – Help them find the door, they will probably struggle on their own.
- “It’s a really helpful framework” – A victim of group ignorance.
- “It’s good but not as fast as HTML5” – Haha… get out.
I’m dead serious when I say these are all real answers from real people.
Yes, jQuery is the compatibility tool of choice these days, and for good reason; it solves the tedious problems it aims to solve in a really easy way. The problem is, most, and I do mean the majority of, ‘web developers’ think it is the only way to work with the DOM. It isn’t even that jQuery is slow; for what it does, it is quite fast. The issue is that a huge number of developers use it to do quite literally everything.
Actually, I blame developers…
This whole ‘DOM is slow’ thing is really just “I’m too stupid to know that what I’m doing is stupid”, and practically no one is immune. I once worked for a large ‘software’ company that came to me one day with a laggy screen in a ‘web app’ and said “fix this”. The bespoke JavaScript for this one screen was around 400 lines long and had nearly 500 jQuery selections in it. Worse, many selections were being made inside a loop, for no apparent reason! I ended up deleting about 90% of the selectors, just by assigning each selection to a variable, which resulted in a 1000% performance increase! Whenever you select an element (or elements) with jQuery, you’re instantiating a very feature-rich wrapper.
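The fix amounted to this kind of change (a sketch; `$` stands in for jQuery, and the selector and item rendering are invented):

```javascript
// Anti-pattern: a fresh selection (and a fresh feature-rich wrapper)
// is built on every single pass through the loop.
function renderSlow(items) {
  for (var i = 0; i < items.length; i++) {
    $('.item-list').append('<li>' + items[i] + '</li>');
  }
}

// Fix: select once, reuse the wrapper.
function renderFast(items) {
  var $list = $('.item-list');
  for (var i = 0; i < items.length; i++) {
    $list.append('<li>' + items[i] + '</li>');
  }
}
```

Same output, but one selection instead of one per item.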
If all you are doing is making HTML elements, DO NOT USE JQUERY. About the worst possible way to create HTML is like this:
$('<div class="imAMoron"><span>derp</span></div>');
Not only will this be insanely slow compared to using the DOM method (document.createElement), it also leaves your code looking like a steaming pile of shit. If you need to make a lot of DOM in JavaScript, use a tool like laconic or crel. And I do recommend creating lots of DOM in JavaScript; it is definitely faster than requesting the same structure from a server, letting the browser parse the HTML to DOM, and then selecting the elements back out of the DOM to manipulate them.
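A helper in the spirit of crel is only a few lines; something along these lines (a hand-rolled sketch, not crel’s actual source):

```javascript
// el('div', {'class': 'notAMoron'}, el('span', 'derp')) builds real DOM;
// no HTML string, no parsing.
function el(tagName) {
  var node = document.createElement(tagName);
  for (var i = 1; i < arguments.length; i++) {
    var arg = arguments[i];
    if (typeof arg === 'string') {
      node.appendChild(document.createTextNode(arg)); // text content
    } else if (arg && arg.nodeType) {
      node.appendChild(arg); // child element
    } else if (arg) {
      for (var key in arg) { node.setAttribute(key, arg[key]); } // attributes
    }
  }
  return node;
}
```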
What now?
First: ignore pretty much anything Facebook has to say about DOM performance. They really have no idea what they are talking about. Sencha was able to make a version of the Facebook mobile app with their framework, and it was FASTER than the official, native app, and Sencha isn’t even particularly fast.
Second: Stop using DOM libraries out of habit! If your target browser is IE8 or above, jQuery hardly provides any features that aren’t shipped on the host object anyway. document.querySelectorAll will pretty much do everything you need. jQuery is a tool, not a framework. Use it as such.
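Most selection habits translate directly (the selector here is made up; the slice call is only needed if you want real array methods on the result):

```javascript
// jQuery habit:  var $rows = $('#results .row');
// Plain DOM, IE8+:
function getRows(root) {
  // querySelectorAll returns a static NodeList, not an Array
  var nodes = root.querySelectorAll('#results .row');
  return Array.prototype.slice.call(nodes);
}
```

Usage: `var rows = getRows(document);`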
Third: Use document fragments. If you are changing the state of the UI in a loop, don’t change it while that DOM is in the document, because every change you make will cause a re-draw or re-flow.
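For example, building rows off-document and appending them in one go costs a single reflow instead of one per row (a sketch; the names are invented):

```javascript
function appendRows(list, labels) {
  var fragment = document.createDocumentFragment();
  labels.forEach(function (label) {
    var row = document.createElement('li');
    row.textContent = label;
    fragment.appendChild(row); // off-document: no redraw, no reflow
  });
  list.appendChild(fragment); // one append, one reflow
}
```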
Fourth: USE THE GODDAMN PROFILER! In almost every case, the bottleneck will end up being your code, not the DOM.
Sir! Yes, Sir!
Awesome post. Just wanna check something. Change that jsfiddle number to 100000 and tell me what you get in the alert box and how long it takes to load the result.
It takes an extremely long time (minutes), it seems to be hitting some arbitrary implementation limit. That said, why would you ever need 100 thousand elements in the DOM at any one time?
Actually looks like it takes about 8 seconds to create all the DOM elements, but about 5 minutes for the browser to render that to the screen.
This completed in 568ms on mine, while the original 10000 divs were rendered in 56ms.
@Kory: Thanks for the interesting article. I found myself guilty on many counts.
You could also easily create a wrapper function, like the Jasmine libs do, to do that in an easier way. For myself, I think it’s so easy to do in JS, and totally browser-compatible, that I never thought of doing it with any libs, even before considering speed. Once you do, I can see no advantage in using a lib for it.
For the same reason, I never understood in the first place why there are so many simple blogs and websites that include jQuery only to barely use it, where they could have used Zepto or no library at all.
I won’t dispute the fact that DOM processing is fast. But to process the document, you must first load it over the network. That is where native apps have an advantage: native app I/O is local, and local I/O should typically be an order of magnitude or more responsive than remote I/O.
Not really; you still need to download a native app before you can use it, and it’s usually absurdly massive for what it does. You can quite easily cache web apps these days, and considering the app is likely to be less than 1MB on first load, you’re already saving huge bandwidth and load times.
That’s also ignoring the other benefits of web, like the fact that a web app will work pretty much everywhere. Develop once, deploy everywhere. Like Java, but good.
You are also forgetting that if you are targeting mobile apps, you can use available technology such as PhoneGap to load files directly from disk.
But the point here is that PhoneGap is an option; you don’t need it if you don’t want it.
very nice
Interesting post, but I’ve spotted an obvious flaw in the reasoning: “JS isn’t slow, it is an abstraction tweaked by a large number of devs to be fast”, then “don’t use jQuery for everything”, and later “use document fragments”.
Guess what, jQuery is an abstraction tweaked by a large number of devs to be fast and… it makes extensive use of document fragment.
jQuery’s main aim isn’t speed; it’s built to ease the pain of developing across browser quirks. jQuery is quite fast for what it does; the issue is that people use it for the wrong reasons.
In IE8 there is no reason for
var matches = $('div.foo'); // jQuery
to be much slower than
var matches = document.querySelectorAll('div.foo'); // native JavaScript
If so, jQuery needs to be fixed.
Aside from that, recommendation #3 is solid, and I recommend everyone check their code to eliminate these unnecessary bottlenecks.
typo: “if you are an Android developer, your coding against the Dalvik…”
“your” should be “you’re”
Hallelujah!
As a lone developer I often think these things and don’t know why others don’t do the same as I. I’ve never had a use for jQuery but everyone makes you think you must be doing something wrong. I could never understand why I had to use jQuery for the DOM when jQuery itself uses the DOM.
Never made any sense to me. Glad I’m not alone.
Much ❤
Your post contains a large piece of truth about what people shouldn’t do with jQuery. You are especially so right about documentFragments.
Something you did not mention, which is somehow a complement to what you say about DOM manipulation: we often use jQuery selectors to refer to DOM elements, and we often use jQuery pseudo-selectors to determine states that are directly readable from the DOM element, such as the checked property of an input. That’s a trap that can cost around a 100× performance decrease.
And it’s the same for element visibility, selected state, named form fields, childNodes, siblings, parents, attributes, etc.
We surely have to learn or relearn native DOM capabilities, depending on the browsers we want to support of course.
This is GOLD!
It reminds me of Vanilla.JS
Probably the best piece of JS available!
Thanks for this little dose of sanity.
I often get the feeling that I’m the last person on the face of the planet that doesn’t immediately include jQuery by default (and then use it for every single thing); good to know I’m not alone in the wilderness.
“Premature optimization is the root of all evil.” – Donald Knuth
There’s nothing wrong with using jQuery for creating DOM structures. It might delay the rendering of the page by 0.02 milliseconds. Wow.
Did you know that if you use $('<div>'), jQuery will use document.createElement('div')?
Only when you use a loop running thousands of times, then you can look at optimizations. And then, you better use the profiler to find weak spots.
I’ve seen code written out with the W3C DOM, with loops to assign classes etcetera, while it could’ve been ten times shorter with jQuery, which was available. “Why is this?”, I asked. “For performance” was the response. This code was only run once. Well, that’s stupid. It’s a waste of the programmer’s time.
I’m going to reiterate, I have no issues with jQuery, I have issues with jQuery being used for the wrong reasons.
As for DOM creation, even if you ignore speed, you still end up with disgusting, un-maintainable code when you create DOM in jQuery. Considering you can add crel to a project at the cost of a few bytes, you would have to be crazy not to.
That’s not necessarily true. This form isn’t too bad:
$("<div/>", {
  id: 'something',
  text: 'Inner Text',
  …
});
You’re writing DOM, serialised as HTML, in your JavaScript, so that you can make DOM.
C’mon… really?
Just. Make. DOM.
Wrote this quick comparison of DOM manipulation between JavaScript and jQuery in response to Kory’s post. It shows the vast difference: http://jsfiddle.net/WinterJoey/VWkS2/
Fixed to have the fiddle include jQuery.
http://jsfiddle.net/VWkS2/2/
On my machine:
native DOM API: 70ms,
jQuery: 2360ms.
How about with $(‘body’) cached outside the loop, which seems like a reasonably simple optimisation to include?
Caching $(‘body’) outside the loop makes things a bit fairer and representative, and in Firefox 48, gives me 56ms for pure DOM and 712ms for jQuery. Still slower, but that simple caching change makes the jQuery code over 2.65 times faster.
It’s nearly 2017 and we’re still using jQuery?! I think it’s time to move on..
Actually, your jsFiddle test shows no mercy to jQuery DOM manipulation, fetching $('body') on each of the 10000 iterations, and appending the same way. However, you directly append an HTML string, which is faster than prebuilding elements.
That’s the times I get with my jsFiddle test : http://jsfiddle.net/alexis_tondelier/Fuexd/
– DOM API with documentFragments and innerHTML: 15.000ms
– DOM API with documentFragments: 13.000ms
– DOM API with clones and with documentFragments: 12.000ms
– DOM API without documentFragments: 14.000ms
– jQuery not optimized at all: 90.000ms
– jQuery worse than not optimized at all: 746.000ms
– jQuery not optimized at all and with prebuilding: 112.000ms
– jQuery not optimized at all and with clever prebuilding: 158.000ms
– jQuery optimized with documentFragments: 62.000ms
– jQuery optimized with documentFragments but slowed by clever calls: 107.000ms
From jQuery-based web-app optimization experience:
Say you generate HTML from a server- or client-side templating engine and want to add it to your page, but you work with collections, with one HTML string rendered per view-class instance. Then say you want your web app to be fluid on Internet Explorer.
1) you won’t fetch the container with jQuery for each append call.
2) you won’t append the created HTML string on each view render; you will prefer storing detached DOM elements in a documentFragment, and finally append this fragment to the view’s collection container.
And there, you have removed the main weakness of your collection-view rendering code. The final steps to reach DOM-API execution times are these:
1) separate rendering from listener initialization.
2) attach listeners only on the collection container, so that you don’t lose execution time attaching listeners per element. There you can use jQuery for event listeners!
3) don’t use jQuery to prebuild the elements you append to your documentFragment; directly use the DOM API’s appendChild (or insertBefore, or whatever you want) with an element built via document.createElement(nodeName) and its innerHTML property.
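The delegation idea in step 2 above can be sketched like this (a hand-rolled sketch; the names are invented, and the className comparison is deliberately simplistic — real code would handle multiple classes, and IE8 would need attachEvent rather than addEventListener):

```javascript
// One listener on the container instead of one per row.
function delegate(container, className, handler) {
  container.addEventListener('click', function (event) {
    var target = event.target;
    // walk up from the clicked node to the container
    while (target && target !== container) {
      if (target.className === className) { return handler(target); }
      target = target.parentNode;
    }
  });
}
```

Rows added to the container later are handled automatically, with no per-row listener cost.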
For readability concerns, using templating engines, such as the Underscore.js one, client- or server-side, is a good choice (but maybe not the best; to be compared).
For performance concern, you definitely reduce execution times by not falling into JavaScript/jQuery traps.
The next concern would be memory (especially working with dynamic collections).
Great post, thank you! Best post about the real performance of browser DOM manipulation.
Thanks for this post.
I just started working with HTML5, JavaScript etc. a few months ago.
I have mainly been using jQuery, as that is what people used in most examples. For a specific list that I needed to build in JavaScript, it took too long using jQuery.
Learning from this article, I now got the list loading more than twice as fast as before 🙂
I have posted the same stuff for years, only to get downvoted on every amateur forum and reddit thread (same thing) while getting a “Well, duh” on professional boards. Such articles as this need to be promoted more often. I doubt it will get far anywhere else.
Your jsfiddle takes 50-60ms to run on my PC…
DOM manipulation for each new element is quite slow – preparing the elements and then manipulating the DOM just once lowered the time to 15-20ms.
With 1M of elements difference was 4911ms:1558ms. http://jsfiddle.net/YNxEK/280/
I made an audio visualizer project with hundreds of elements changing with the music. It worked well on my workstation, but it had terrible performance on slower devices. I realised it was caused by thousands of DOM manipulations in each animation-frame iteration. I moved the styling of the elements from each element’s style attribute to one single dynamic style element, which resulted in a massive performance boost, since the DOM was no longer manipulated thousands of times each iteration, but only once. Sure, this takes away the possibility of manipulating the DOM mid-way, but that is a question of algorithm design.