JavaScript

Async Functions: Delivering on Promises

Since Promises were first introduced to JavaScript, I have been frustrated by disappearing errors. Because JavaScript is so dynamic, errors can surface much later in the development process, sometimes making it all the way to production before anyone notices. Many tools, like linters and type checkers, have been introduced to try to detect errors earlier. With Promises, however, sometimes no error is ever thrown at all; your code just doesn’t work the way you expect it to.

A small example to illustrate:

function getResult() {
  return new Promise(function(resolve, reject) {
    someAsyncFunction(resolve) // this function may throw an error asynchronously
  })
}

As you can see, reject never gets called. You must explicitly handle the error condition, or you won’t get so much as an error in the console. The error is thrown, but swallowed by the promise.

So why don’t I just handle the error? The point is not that errors cannot be handled. They certainly can be. The problem is that it is easier to do the wrong thing with promises. By default errors are caught and ignored.
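Here is roughly what doing the right thing by hand looks like with plain promises. (This is a sketch: the someAsyncFunction stand-in and its callback signature are mine, just to make the example runnable.)

```javascript
// A stand-in for any callback-style async API.
function someAsyncFunction(callback) {
  setTimeout(function () { callback(null, 42); }, 0);
}

function getResult() {
  return new Promise(function (resolve, reject) {
    someAsyncFunction(function (err, value) {
      if (err) reject(err); // easy to forget, and nothing warns you if you do
      else resolve(value);
    });
  });
}

getResult()
  .then(function (value) { console.log('got', value); })
  .catch(function (err) { console.error(err); }); // also easy to forget
```

Every error path has to be wired up explicitly; forget either the reject or the .catch and the failure silently disappears.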

Furthermore, async stack traces suck. This is a pain point for both promises and callbacks. When an error occurs somewhere deep in asynchrony, the stack may only show you where the error was thrown, with little or no context about where the operation was initiated in the first place.

Both of these problems go away when using async functions with await.

ES7 (or ES2016, if you prefer) proposes two new keywords: async and await. Under the hood the new mechanism uses promises, but by default it does the right thing.

The prior example becomes:

async function getResult() {
  return await someAsyncFunction()
}

This code assumes someAsyncFunction is also updated to the new syntax.

So what did we get?

  1. If something goes wrong, an error is thrown and ends up in the console if not handled.
  2. The stack trace is complete, so you can see what initiated the failed action.
  3. A try/catch around your getResult() call works.
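To make point 3 concrete, here is a sketch. The someAsyncFunction here is a stand-in I made up that always rejects, just to exercise the error path:

```javascript
// With async/await, an ordinary try/catch handles async failures.
function someAsyncFunction() {
  return Promise.reject(new Error('network down')); // stand-in failure
}

async function getResult() {
  return await someAsyncFunction();
}

async function main() {
  try {
    return await getResult();
  } catch (err) {
    // handled like any synchronous error would be
    return 'recovered: ' + err.message;
  }
}

main().then(function (msg) { console.log(msg); });
```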

Want to know more about how async and await work? You can read Jake Archibald gush over their many virtues on his blog, or if you like, you can check out the surprisingly readable proposal.

JavaScript

Greater than the sum of its components

Lately I’ve been working on a cool project written exclusively in JavaScript, with a Node.js & MongoDB back end, and a CommonJS Backbone front end. What I have found most fun so far is the synergy I get between certain components.

Templates

First off, I admit I’m a reinvent-the-wheel kind of engineer: I readily find some minor fault in existing solutions and decide I have to write my own. EJS is really great, especially for someone coming from a PHP background who doesn’t think logic-less templates are the best thing since sliced bread. However, I really needed templates that could run asynchronously, doing file or network I/O for includes and other such magic.

So, I made Stencil. I was able to make templates that compile without mucking up the line numbers, so debugging is very straightforward; no exception rethrowing necessary. The very important async use case was satisfied without forcing all templates to use the async pattern.

sync_result = sync_tpl(data); // works if no async code in template
async_tpl(data, function(err, async_result) { }); // always works

Where the whole becomes more than the sum of its parts: a small snippet lets me directly `require` my templates and get back the function instead of the string:

require.extensions['.html'] = function(module, filename) {
	var fs = require('fs'), stencil = require('stencil-js'),
		opts = { id:filename, src:fs.readFileSync(filename, 'utf8') };
	module._compile(
		'module.exports=' + stencil.compile(opts, true) + ';',
		filename
	);
};

Now the rest of my code that uses templates doesn’t have to care that I use Stencil. You just `tpl = require('path/to/template.html')`. This is possible because Node.js has an extensible require, and Stencil allows you to compile to a JavaScript string instead of just to a function. If I were to go back and change the templating system to EJS, Jade, or Mustache, I would only need to update this one little snippet.

Client CommonJS

I liked Node.js’s module system, and I didn’t want to replace it or use a separate system on the front end. Don’t get me started on the mess that is UMD. So, I created my own Modules library. You’ve heard about this before.

I got CommonJS modules to load (asynchronously) and run in the browser, so it was trivial to share code used on both ends. Again, line numbers weren’t munged in the server-side translation, so debugging works just like you always expect it to.

The library runs as a middleware for Express, enabling the reload functionality AMD lovers rave about, as well as standalone for concatenating and minifying bundles in the production build process. All with a client-side weight one-third that of AlmondJS, although that or RequireJS would also work on the front end, since Modules still uses AMD as its transport format.

The real magic though, is that the Modules library has an option for translating certain types of files, giving us the same `require` functionality for our templates that we had on the server, and because the translation happens server side (or at build time), the client code can keep a Content Security Policy that disallows eval and unsafe inline code, as Stencil never has to be loaded in client code. (Lighter & more secure. Woohoo!)

var stencil = require('stencil-js');

app.use(require('modules').middleware({
	translate:{
		html:function tpl(name, file, src) {
			var opts = { id:name, src:src };
			return 'module.exports=' + stencil.compile(opts, true) + ';';
		}
	},
	root: './components/', // file root path
	path: '/module/', // url root path
	// ... other options
}));

Backbone

One magic thing I got for free: Backbone and Underscore are already CommonJS compatible, so passing them through the same middleware just worked. Async and countless other Node.js modules also just work.

Adding it all together

While I chose to write my own templating and module components, many other libraries include the little hooks that make these synergies possible. Each component individually is really nothing spectacular, but when you put them all together you get a product that is cohesive from front to back, and really fun to work on.

CSS

A Case Against Vendor Prefixes In CSS

I am a web developer, and a rather impatient one too. When a new feature is available in a few browsers, I want to use it. Most of the time, these features are either experimental or not finished with the standardization process when they are generally available. So, they are prefixed by the vendor. This is how the process was designed, so that is what vendors do.

Why do we prefix?

Prefixing is a kind of disclaimer for the feature: “Hey, this is likely going to change, so don’t rely on it.” In theory this seems like a good way to go about it. If I am a browser maker and I think of a cool new feature, say, a gradient defined in CSS instead of an image, I really won’t know how good my design is until lots of people have used it and given feedback. Of course, if I care about the community and the betterment of the web for everybody, I share my idea with other browser vendors and get their take on it too. Often we have competing ideas about how to implement it, so we want a way to distinguish between them. This competition is wonderful, and it will lead us to a better solution. So I make my background-image:-andy-linear-gradient(…), and my competitor makes their background:-steve-gradient(linear, color-stop(), …). People can try both out, the designs work their way through the standards process, and eventually everyone has a background:linear-gradient(…) feature.

In theory it works out great, but what about in practice?

Here’s what actually happens, from a dev’s perspective.

My favorite browser, Shiny, implements a cool new feature: -shiny-gradient(). I play with it and think it’s really cool, but to be safe, I don’t actually use it in any production site. A year later, the other browser I support, Ice Monster, has long since added its own -ice-gradient(). Two years later, pretty much every browser has its own prefixed version, even the Laggard browser.

Nice. It only took two years for the feature to be generally available, so I start to use it, even though my code looks like this:

  background-image: -shiny-gradient(linear, left top, left bottom, from(hsl(0, 80%, 70%)), to(#BADA55));
  background-image: -shiny-linear-gradient(top, hsl(0, 80%, 70%), #BADA55);
  background-image:    -ice-linear-gradient(top, hsl(0, 80%, 70%), #BADA55);
  background-image:    -lag-linear-gradient(top, hsl(0, 80%, 70%), #BADA55);
  background-image:     -my-linear-gradient(top, hsl(0, 80%, 70%), #BADA55);
  background-image:         linear-gradient(to bottom, hsl(0, 80%, 70%), #BADA55);

I don’t mind too much, because I use SASS or some other CSS pre-processor that takes care of all the prefixes and nuances for me. But this still bothers me in two important ways.

  1. My stylesheets are getting much heavier than they used to be, which is a concern because I want people to be able to view my site quickly even on mobile devices. Most of the syntax is exactly the same, but I still have to write it over and over again for each vendor.
  2. I have to opt in for each browser I want to get the feature. If a new browser becomes popular, it won’t get the gradients unless I go back and add yet another version.
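For the curious, the pre-processor escape hatch looks roughly like this. (A sketch: the mixin name is mine, and the prefixes match the fictional vendors above.)

```scss
// One mixin hides all the vendor variants behind a single call.
@mixin linear-gradient($args...) {
  background-image: -shiny-linear-gradient($args);
  background-image: -ice-linear-gradient($args);
  background-image: -lag-linear-gradient($args);
  background-image: -my-linear-gradient($args);
  background-image: linear-gradient($args);
}

.banner { @include linear-gradient(to bottom, hsl(0, 80%, 70%), #BADA55); }
```

It keeps my source tidy, but note that every prefixed line still ships to the browser in the compiled stylesheet.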

I complain, but I keep doing it anyway. Two years later (now four years since the feature was first introduced), the standard is still a draft, and because I want to support more than just the bleeding-edge browser, even when the standard finalizes I must leave all of the vendor prefixes in place indefinitely.

The problem gets more real with mobile.

Management asks for a ShinyPhone version of our application. They don’t care about Robot, even though it uses -shiny prefixes. I am given enough time to make the ShinyPhone version, but no time to even test in Robot. Eventually though, I manage to get it working because I own a Robot phone.

A few months later, Catchup Phone 7, Ice Monster Mobile, and Concert Mini are showing up on more phones. They have their own prefixed versions of all the great Shiny features I used, but because I didn’t know about them, the mobile application looks awful, and it would take me several weeks to fix for each new phone. Management is not willing to spend that kind of time, so even though these browsers support all the features, my site is broken on them. Who will our customers blame? It works on ShinyPhone, so it must be that Ice Monster Mobile just isn’t as good. Ice Monster and the other browsers get blamed for my site not working well there.

The Solution?

It is clear that if other browsers want to make themselves look good, they have to do more than just implement the feature. If they also supported the -shiny prefixes, my application would work far better, and their browsers would look good. But that completely undermines what we’ve learned in the browser wars, and goes against the reason for prefixing in the first place!

We don’t really have a good solution yet.

However, I have an idea I think is worth talking about. What if the feature hadn’t been prefixed at all? I would have been less nervous about putting it into production, because CSS simply doesn’t apply rules that aren’t implemented. Though the syntax will likely change, I can add the new versions as they appear, and whichever one is implemented will work. My stylesheet ends up more like this:

  background-image: gradient(linear, left top, left bottom, from(hsl(0, 80%, 70%)), to(#BADA55));
  background-image: linear-gradient(to bottom, hsl(0, 80%, 70%), #BADA55);

My application just works in every browser that supports the feature, with little thought or effort on my part. And if the spec doesn’t change, which in practice it rarely does, I am done more than four years before it is standardized.

Benefits of prefixing:

  1. Sense of security for browser vendors, so they can change the implementation and make it better.
  2. Web developers are made aware that the feature isn’t really ready yet.
  3. Credit goes to the vendor who pioneered the feature.

Benefits of NOT prefixing:

  1. Less effort and maintenance for web developers trying to make their application (and browsers) look good. They don’t need to spend a lot of time researching which browsers support which features.
  2. Lighter weight stylesheets for everyone, especially mobile browsers.
  3. Browser vendors can focus on the features, not on evangelizing their prefix.
  4. No -webkit- prefixes being supported by Mozilla. Dang, I said it after trying so hard not to.

Honestly, I do see the value in prefixing experimental and non-standardized features. But vendors have to break them often, and the standard needs to move faster, if developers are realistically going to experiment with experimental features and wait for the standard for production use.

Please feel free to disagree in the comments, check out the discussion going on in the w3c, or read up on other opinions. Better yet, get involved.

General

On Pattern Hating

I have long considered myself a Java hater. I now think it really has nothing to do with the language itself. Sure, it was easy to point at slow performance (which hasn’t been true for a long time now), or to mourn the missing syntactic sugar (Pattern.compile("abc", Pattern.CASE_INSENSITIVE) vs /abc/i), but I think my problem with Java is really a problem with the mindset I have observed in novice programmers (for whom Java is usually their first language).

The problem is with patterns.

Patterns are great. They provide a toolbox that can lead developers on the road to “best practice”. But…

Patterns are a poor substitute for problem solving.

It doesn’t matter if you know how to make a Singleton, or even when a Singleton is useful, if the problem at hand is improving report speed. You need to know math, you need to know computation, and you need to find the unnecessary work being done. It’s possible we’ll use a Singleton, but it won’t be the solution to the problem.

In an interview, if I ask for code to find the most common words in a bunch of text files, “public class WordRanker {” is unimportant. I’ve seen a few programmers struggle for the first few minutes to figure out if it should be a class, a function, or what language to use. But once, I was impressed by someone who quickly figured out what they wanted to do, and then said, “I’d google how to do that.”

The pattern is accidental complexity. Problem solving is essential complexity.

JavaScript

CommonJS in the Browser

I’ve been thinking a lot lately about how to use CommonJS modules in my web applications. I even started a repository on github for my implementation. As a quick search makes apparent, the task is non-trivial; lots of people are trying to do the same thing, and every one of them has a different idea about how it should work.

But WHY would you want to use CommonJS (formerly known as ServerJS) modules in a client environment?

Ideally you can share modules between client and server, but that requires a server environment like Node.js, which might make management really nervous. Even without sharing, the CommonJS module system helps us avoid some annoyances in JavaScript development.

  • Each module has its own scope. I don’t have to manually wrap each file in a function to get a new variable scope. (Of course, to achieve this, the boilerplate is going to have to wrap each module’s code in a function anyway.)
  • Namespaces are only used in the require call, not everywhere in my code. Almost inevitably, every web application I’ve worked on ends up using code like the following:
        var whatIWanted = new FormerCompanyName.Common.CoolLibrary.ConstructorName( More.namespace.chains.than.you.can.follow );
        // the rest of this file continues to use these ridiculously long namespaces
    

    Although I’m sure many will disagree with me, I much prefer the CommonJS way:

        var CoolModule = require('common/cool-library'),
            thingINeed = require('more/namespace/chains/than/you/can/follow'),
            whatIWanted = new CoolModule.ConstructorName(thingINeed);
        // the rest of the file is void of long namespaces
    

    And much more importantly, when I define a new module (or class as some insist on calling them):

        FormerCompanyName.Common.CoolLibrary.ConstructorName = function() {/* ... */};
        // versus
        exports.ConstructorName = function() {/* ... */};
        // or even
        module.exports = function() {/* ... */}; // this case isn't in the spec, but I really like it, so I made sure my library can handle it.
    
  • Because you can also use relative module identifiers (“./sibling-module”, “../uncle-module”), when the company changes its name, it can be as simple as renaming a folder to update all the top-level module ids.
  • Additionally, modules can be included in the page in any order, and each one only executes when first required, instead of every module executing immediately upon inclusion, which makes the script order specific and fragile. With CommonJS I can just append a new module to the end of the list; otherwise I have to make sure it comes earlier in the page than whatever uses it, and after whatever it uses.
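That last point, lazy execution, is the heart of the whole scheme, and the plumbing behind it can be sketched in a few lines. (The names here are illustrative, not my library’s actual API.)

```javascript
// Modules register a factory function; nothing runs until first require.
var Modules = {
  registry: {},
  cache: {},
  define: function (id, factory) {
    this.registry[id] = factory; // stored, not executed
  },
  require: function (id) {
    var cached = this.cache[id];
    if (cached) return cached.exports;
    // cache the module object before running the factory, so circular
    // requires see the partially built exports instead of recursing forever
    var module = this.cache[id] = { exports: {} };
    this.registry[id](Modules.require.bind(Modules), module.exports, module);
    return module.exports;
  }
};
```

Each registered factory receives (require, exports, module), just like the wrapper Node.js puts around every file.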

Okay, but how much work is it going to be?

Let’s walk through first what I wanted my server-side code to look like, then what it has to do to make it work on the other side.

As most of my server-side experience thus far has been in PHP, that’s the first language I’ve used for my implementation.

<!DOCTYPE html>
<html>
<head>
    <title>My Awesome Application</title>
    <link rel="stylesheet" href="awesome-styles.css" />
</head>
<body>
    <!-- blah blah blah -->
    <?= Modules::script() /* include all necessary script tags */ ?>
    <script>require('awesome').go()</script>
</body>
</html>

The Modules class will look for all js files in the folder you put it in, and any subfolders, and will id them by their path.

Yes, I am including every module, not actually checking dependencies. I refer you back to my previous post and say this is the simplest way; if the caching headers are working, the experience won’t suffer. If you disagree, you are welcome to use one of the fantastic libraries that load modules on demand.

Hopefully that is all the server-side API you need to worry about, but there is more if you need it.

So what is that library doing to my poor scripts to make the CommonJS module environment?

I will explain in detail what goes into it in another post, but if you are daring, you can check out the source on github.

JavaScript

Some thoughts on Web 4.0

The web has undergone some significant changes since its inception. 1.0 consisted mostly of HTML documents, with simple CSS styles and little or no JavaScript interaction. 2.0 was the AJAX revolution, making dynamic sites with complex JavaScript. Some have suggested we are already in 3.0, with HTML5 and SVG well supported in the latest version of every major browser. What I’d like to talk about is what I wish would come next.

As many who are immersed in front-end web development have noticed, HTML and SVG have different DOMs, different styles, and competing animation tools. Things have been getting better, with HTML5’s inline SVG support and browsers beginning to bring each markup’s features to the other, but the inconsistencies are still painful, and they make implementation sub-optimal for web and browser developers alike.

What I would love to see is something akin to the following document:

<!DOCTYPE html>
<html lang="en">
<head>
  <title>Fancy HTML+SVG</title>
  <link rel="stylesheet" href="styles.css" />
  <defs>
    <path id="logo" desc="My Fancy SVG Logo" d="M59,0 l69,69 h-15 l-44,44 v15 l-69-69 h15 l45-45 5,5 -45,45 44,44 44-44 -49-49 z  M59,44 c0-8,10-8,10,0 v40 c0,8-10,8-10,0 z" />
    <filter id="soft_blur"><feGaussianBlur in="SourceGraphic" stdDeviation=".5"/></filter>
  </defs>
  <link rel="shortcut icon" sizes="16x16 24x24 32x32 48x48" href="#logo" />
</head>
<body>
  <header>
    <a id="home" href="."><use href="#logo" /></a>
    <h1>The TaleCrafter's Scribbles</h1>
    <h2>notes about science, fiction, and faith… but mostly web development</h2>
  </header>
  <article>My Article text and images and stuff go here</article>
  <footer>Boring Legal and maybe locale selection in here</footer>
  <script src="script.js" async defer></script>
</body>
</html>

styles.css

  #logo { background:#111; } /* applies to everywhere <use>d, including favicon */
  #home { width:64px; height:64px; float:left; }
  #home path { transform:scale(.5); transition:background .5s ease; }
  #home path:hover { background:#0d0dc5; }
  h1 { filter:url(#soft_blur); transition:filter .5s linear; }
  h1:hover { filter:none; }
  /* ... lots more styles ... */

script.js

  document.querySelector('#home path').addEventListener('click', /* open menu or something useful */);

Summary of things that would be cool:

  • no need for foreignObject or anything like that, simply mix and match tags
  • put all the useful attributes in the same namespace (make `use` work without the xlink: namespace)
  • css transitions & animations on svg styles (properties would also be nice)
  • defs and use in html documents
  • filters on html elements (Firefox is already working on this)
  • unify styles like background and fill
  • JavaScript DOM API identical

In short SVG and HTML would be one and the same. You would style both with the same css.

Some nitpicks:

  • I’m not sold on defining filters in markup, then using them in style. It feels… odd. Why not define them in style too? (Oh no, that might be too much like IE’s filters! Gasp!)
  • Animating is still a crapshoot. It feels like it should be in JavaScript, but declarative syntax is so much simpler, and easier for browsers to optimize. Some SMIL animations work in some browsers. CSS animations are still nascent but promising. (Even IE looks like it might implement them in ‘native HTML5’. Sorry, couldn’t help myself.) Still, JavaScript is the only reliable way right now.

Let me hear an Amen, or let me know what I’m missing. Leave a comment and let’s talk about it.

JavaScript

Load only when needed, or Preload everything?

As JavaScript and web application best practices have formed over the last several years, two competing patterns have emerged for loading the scripts an application needs:

Don’t load any JavaScript until you know you need it.

I usually feel like this is the way to go, because a lot of my code is specific to a particular widget or workflow. Why make the page take longer to load initially for something the user won’t do every visit? Just put in minimal stubs to load the full functionality once the user begins down that workflow, or interacts with the widget.
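The stub idea can be sketched as a load-at-most-once wrapper. (The function names are mine; in a real page the loader argument would inject a script tag, which I leave abstract here so the pattern stands on its own.)

```javascript
// Wrap an expensive loader so it runs at most once; callbacks that
// arrive while the load is in flight are queued and replayed.
function loadOnce(loader) {
  var loaded = false, result, waiting = [];
  return function (callback) {
    if (loaded) return callback(result);
    waiting.push(callback);
    if (waiting.length > 1) return; // a load is already in flight
    loader(function (res) {
      loaded = true;
      result = res;
      waiting.forEach(function (cb) { cb(result); });
      waiting = [];
    });
  };
}
```

The widget’s click handler calls the wrapped loader; the first interaction pays the download cost, and every later one reuses the already-loaded code.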

Pros:

  • Lighter initial page weight
  • Encourages functionally modular code
  • Memory performance boost (important if you have to support old browsers)
  • Speed performance boost (if done right)

Cons:

  • Adds additional complexity to code
  • Laggy performance (if done wrong)
  • Lots of HTTP requests

Combine and minify all JavaScript into one file loaded at the end of the html file.

You know beforehand what is going to be needed on each page, and YSlow warned you about too many HTTP requests. So bundle up all the scripts into one download, which will be cached after the first page view.

Pros:

  • Easy to implement (lots of existing tools will do it for you)
  • Initial page load (once cached) is really fast

Cons:

  • Loads a lot more than is usually necessary
  • Initial load can be much slower

So how do you know which pattern to follow? It depends! If your application is very complex, and large portions of the functionality are used infrequently, it makes a lot of sense to use an on-demand pattern. If your application is fairly simple, or if all of the code is likely to be used every time, then combining all of the scripts and including it from the start will be much easier.

I recently worked on a smaller application where I divided all the script into two files. The first was loaded initially, and provided just enough functionality for the login dialog. Upon successful login, the second script was loaded, which combined all of the remaining pieces of the application.

The point I most want to make is this: Don’t just follow a pattern because it is a “best practice”. Take the time to figure out the best solution for your project.