Things I like about web apps

A native app with a complex user interface

Web apps get a bad rap. They are sometimes slower than their native counterparts. They feel out-of-place if their UI varies greatly from the native platform.

But web apps also have things native apps are missing. Here are some of them.

  • I can find text. In any web app, I can press CTRL+F to find text on the page. I use this dozens of times daily. When I’m using a native app, I have to resort to scanning text manually.
  • I can log in with one click. I use a password manager to keep track of my logins across thousands of sites. When I have to use a native app like Disney+ and I need to log in, I don’t know my password, and password managers don’t generally work in native apps (desktop especially, but sometimes also on mobile). I have to launch my browser, open my password manager in the browser, and copy/paste my credentials.
  • I can select text. I often use text selection as a reading aid. I also use it to grab snippets of text, repost a quote, share it. With native apps, I can’t do this.
  • I can fill out forms automatically. Does that app need your name, address, phone, email, and more? With native apps, I have to type all that. With web apps, my browser or password manager can do it automatically for me.
  • I can share app content. You see something in the app and you want to share it. On the web, you can just send the link to friends. (Or even better, link to individual elements on the page, or even link to a section of text.) If it’s an image or video, you can right-click and grab the link to it, save it to disk, or send it to another app. But if it’s a native app, I can’t do those things.
  • I can pay for things without typing my credit card details. On the web, when I go to pay for something, the browser or password manager can fill out my credit card details with a single click. On native, I have to find my card and physically type the name, type of card, card number, expiration date, and CVV.
  • I can open another part of the app without leaving my context. You’re deep in an app. Maybe you browsed for movies, navigated to the 8th section, and horizontally scrolled until you found the one you’re looking for. Before you hit play, you want to quickly check the name of that other movie you watched. You could click Recently Watched…but then you’ll lose your current context and have to do it all over again. Unless you’re in a web app: then you can just Ctrl+click/middle-click to open Recently Watched in a new tab while preserving your context in the current tab. Native apps don’t do this, forcing you to lose your context.
  • I can get to the content quickly. For all the talk about native performance, native apps often load slower than web apps. One reason may be the inefficiencies of higher abstractions in native development. But the web has something native does not: multiple billion-dollar companies competing to make it fast. Apple, Google, Microsoft, Mozilla, Samsung, and others are investing heavily in making the web fast. The browser wars are survival of the fittest, and the resulting competition benefits end users. The same cannot be said of any native app framework, desktop or mobile.
  • I can block ads. For years I’ve used the Twitter web app on both mobile and desktop. The Twitter web app has some problems, so once I thought I’d try the Twitter native app. Oooh, the scrolling seemed smoother. Oooh, I didn’t have the weird bug where I open an image, pinch-to-zoom, and accidentally refresh my feed. Nice. Except…ads. Ads everywhere. I hadn’t realized I was missing them because I had been using the web app, which lets me block ads. Increasingly, developers publish a native version of their app so they can push more ads in front of more eyeballs. With native apps, I can’t block ads or tracking scripts.
  • I can scale text and media. Text too small? Need to zoom in on that image? Ctrl + Plus. Web apps let me do this, native apps don’t. Closest I can get on native is the OS-level zoom (e.g. Win+Plus) to get a closeup on the area near the cursor, which doesn’t often suit the task at hand.
  • I can keep using the app even if it’s busy.

    Or, “dog.exe has stopped responding”. Web apps have simpler threading models than native apps and this makes for UIs that tend to be responsive. On the web, when you need to do blocking work like network calls, it’s usually async by default (e.g. fetch(…), import(…), etc.). No need to schedule completion work on another thread; that’s built in. In native land, many developers just do the work on the UI thread, leading to unresponsive apps. Still others will try to coordinate their own threading, which can result in deadlocks, race conditions, or memory errors. While these are possible on the web, they’re much bigger footguns in the native world.
  • I can keep working even if something goes wrong. An unhandled exception occurred when you clicked a button? The native app may just crash, losing your work in the process. “Better die and start over than continue in an unknown state”, is the idealistic advice. On the web, that unhandled exception shows up in the developer console, the web app just keeps running and your work is preserved. This is the pragmatic outlook baked into the web itself: even malformed HTML documents still render successfully.
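To illustrate the async-by-default point above, here’s a minimal sketch. The fake fetch is a stand-in for a real network call (not a real API); the `await` yields to the event loop, so clicks and rendering keep working while the “network” is busy:

```typescript
// Sketch: async-by-default work on the web. fakeFetch stands in for a real
// network call, resolving after a short delay.
function fakeFetch(url: string): Promise<string> {
  return new Promise(resolve => setTimeout(() => resolve(`data for ${url}`), 10));
}

async function loadFeed(): Promise<string> {
  // No manual thread scheduling: the await suspends this function and
  // completion is marshalled back to us automatically.
  return await fakeFetch("/api/feed");
}
```

There’s no completion callback to dispatch to a UI thread by hand; the event loop does that for free.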

These are a few off the top of my head. Add any more in the comments, I’ll add them to the post.

Offline PWAs: My Adventure Beyond the Basics

You can build web apps (really, fancy websites) that work offline. When you build one of these things, you can put your device into airplane mode, open your browser and navigate to your URL, and it’ll just work. Pretty cool!

This is the promise of Progressive Web Apps (PWAs): you can build web apps that work like native apps, including the ability to run offline.

I’ve built simple offline PWAs in the past, and things worked fairly well.

But this week I needed to do something trickier.

I run MessianicChords, a guitar chord chart site for Messianic Jewish music, and I needed to make it work offline. I would soon be traveling to a Messianic music festival where there’s little to no internet connection, and, as a guitar player myself, I wanted to bring up MessianicChords and access the chord charts even while offline.

So I figured, let’s make MessianicChords work entirely offline. Fun!

But this was trickier and a real test of the web platform’s offline capabilities:

  • Lots of content. My site has thousands of chord charts, totaling hundreds of MB. I can’t just cache everything all at once.
  • iframes don’t work with service worker caching. Chord charts are .docx and .pdf documents hosted on Google Drive and rendered via iframe. Service worker caching doesn’t work here because iframes start a new browsing context, separate from your service worker.
  • Search and filtering. My guitar chord site lets users search for chord charts by name, artist, or lyrics, and lets users filter by newest or by artist. How can we do this while offline? Service worker cache is insufficient here.
  • HTML templates reused across URLs. My site is a single page app (SPA), where an HTML template (say, ChordDetails.html) is reused across many URLs (/chords/1, /chords/2, etc.) How can we tell service worker to use a single cached resource across different URLs?

These are the challenges I encountered. I solved them (mostly!), and that’s what this post is about. If you’re interested in building offline-capable web apps, you’ll learn something from this post.

The Goal

Since there are thousands of chord charts — several hundred MB worth of data — I don’t want to cache everything all at once.

Rather, my goal is to make the web app available offline by caching all static assets, then cache any chord charts viewed while online.

Put another way, any chord chart viewed while online becomes available offline.

Making the web app load offline

This is the easy part. You add a service worker to your site, and configure your service worker to cache HTML, JS, CSS, web fonts, and other static assets.

Most “make your PWA offline-capable” articles on the web cover this — but only this.

However, even this “easy” part is fraught with gotchas. Cache invalidation? Cache expiration? Cache warming? Cache first or network first? Offline fallback? Revision-based URLs? etc.

Having implemented such service workers by hand in the past, I now recommend never doing that. 😂 Instead, use Google’s Workbox recipes in your service worker to handle all this for you.

Workbox recipes are small snippets of code that do common offline- and cache-related behaviors.

For example, here’s the static resource cache recipe:

import {staticResourceCache} from 'workbox-recipes';

staticResourceCache();

What does staticResourceCache() do? It tells your service worker to respond to requests for static resources (CSS, JS, fonts, etc.) with a stale-while-revalidate caching strategy, so those assets can be quickly served from the cache and silently updated in the background. This means users get an instantaneous response from the cache. Meanwhile, the cached resource is refreshed in the background. Combine this with versioned resources (e.g. /scripts/main-hash123xyz.js) generated by Webpack, Rollup, or another build system, and you’ve got automatic cache invalidation handled for you.
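As a rough illustration of what stale-while-revalidate means (this is a sketch of the strategy, not Workbox’s actual implementation), here’s the idea with a plain `Map` standing in for the browser’s Cache Storage:

```typescript
// Sketch of stale-while-revalidate: serve the cached copy instantly,
// refresh the cache in the background. A Map stands in for Cache Storage.
const cache = new Map<string, string>();

async function staleWhileRevalidate(
  url: string,
  fetchFn: (url: string) => Promise<string>
): Promise<string> {
  const cached = cache.get(url);

  // Always kick off a background refresh so the NEXT request gets a newer copy.
  const refresh = fetchFn(url).then(fresh => {
    cache.set(url, fresh);
    return fresh;
  });

  // Serve instantly from cache when possible; otherwise wait on the network.
  return cached !== undefined ? cached : refresh;
}
```

The first request waits on the network; every subsequent request is answered from cache immediately while the cache quietly updates behind the scenes.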

Workbox has a recipe for images (cache-first strategy with built-in expiration and cache pruning), a recipe for HTML pages (network-first with slow-load-time fallback), and more.

I use Workbox recipes in my service worker, and this makes my site work offline:

However, if we stopped there, you’d notice that viewing a chord chart still fails:

Chord chart fails to load while offline

Well, crap.

We used Google Workbox and set up some recipes – shouldn’t the whole app work offline? Why is loading a chord chart failing?

iframes and service workers

The thousands of chord charts on MessianicChords are authored in .docx and .pdf format. There’s a reason for that: chord charts have special formatting (specifically, careful whitespacing) that needs to be preserved. Otherwise, you get a G chord playing over the wrong word, and now you’ve messed up your song:

Plus, the dozens of folks who contributed chord sheets to the site prefer using these formats. 🤷‍♂️

Maybe in the future we migrate all of them to plain text/HTML; that would make them much easier to support offline. But for now, they use .docx and .pdf.

How do you display .docx and .pdf files on the web without using plugins or extensions?

With Google Docs iframes.

Google Docs does crazy work to render these on the web, no plugins required. (Under the hood, they’re converting these complex docs into raw HTML + CSS while meticulously preserving the formatting.)

So, MessianicChords embeds an iframe to load the .docx or .pdf in Google Docs.

What does that have to do with offline PWAs?

Your service worker can’t cache stuff from iframe loads. Viewing a chord chart on MessianicChords loads an iframe to a chord chart in Google Docs, but the request to this Google Docs URL isn’t cached by our service worker.


By design, iframes start a new browsing context. That means the service worker on MessianicChords doesn’t (and cannot) control the fetch requests the iframe makes to Google Docs.

End result is, my guitar chords site can’t load chord charts while offline. 😔

There is no magical way around this; it’s a deliberate limitation (feature?) of the web platform.

I considered some wild ideas to work around this. Could I statically cache the HTML and resources of the iframe and serve them back with the chord chart from my own server? No; it turns out Google Docs won’t work when served from a different origin. This and other wild ideas failed.

I finally settled on something of a low-tech solution: screenshots.

I created a service that would load the Google Doc in the browser, take a screenshot of that, and send that screenshot back with the chord chart. (Thanks, Puppeteer!)

When you view the chord chart, we load and cache the screenshot of the doc. When you’re offline, we render the cached screenshot instead.
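The render-time decision can be sketched as a tiny helper (the function name and markup here are illustrative, not the site’s actual code):

```typescript
// Sketch: online, render the Google Docs iframe; offline, render the
// cached screenshot of the document instead.
function chordChartHtml(isOnline: boolean, docUrl: string, screenshotUrl: string): string {
  return isOnline
    ? `<iframe src="${docUrl}"></iframe>`
    : `<img src="${screenshotUrl}" alt="Chord chart (offline screenshot)">`;
}
```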

It works pretty well! Here’s online and offline for comparison:

Not bad!

This approach does lose some fidelity: the user can’t select and copy text from the offline cached image, for example. However, the main goal of offline viewing is achieved.

Searching, filtering, and other dynamic functionality

We now have a web app that loads offline (thanks to service worker + Google Workbox recipes). And we can even view chord charts offline, thanks to caching screenshots of the docs.

If we stopped here, we’d unfortunately be missing some things. Specifically:


  • Search. Searching “blessing” on MessianicChords returns chord charts with “blessing” in the title, artist, or lyrics. How can we make this work offline?
  • Filtering. MessianicChords lets users filter chord charts by artist or song name, or order by recent. How can we make this work offline?

Making this sort of dynamic functionality work offline required additional work.

For search, we need to be able to search artists, song names, and lyrics. While we’re storing request/response for chord charts in the service worker cache, this is insufficient for our needs.

Why insufficient? Well, looking things up in the service worker cache typically requires sending in a request or URL from which the response is returned. But in the case of search, we have no URL or request; we’re just looking for data.

While theoretically I could fetch all chord charts from the cache, it felt like using the wrong tool for the job.

I briefly considered the cheap and simple localStorage. But given my requirement of potentially thousands of chord charts, it too felt like the wrong tool. I also remembered localStorage has some performance issues and is intended for a few small items, not the kind of stuff I’m storing.

If service worker cache and localStorage are both out, what are our remaining options?


IndexedDB. This is a full-blown indexed database built into the web platform with a many-readers-one-writer model. Its API is, like service worker, rather low-level. But it’s built for storing large(r) items and accessing them in a performant way. The right tool for this job.

I set out on implementing an IndexedDB-backed cache for chord charts. The finished product is chord-cache.ts: about 300 lines of code implementing various functionality of MessianicChords: searching, filtering, sorting chord charts.

Once implemented, I set out to make all my pages offline-aware:

  • The home page with search box would be updated to search the cache if we’re offline, or send a search request to the API if we’re online
  • The artists page would be updated to query the cache if we’re offline, or query the API if we’re online
  • …and so on

Except this is quite redundant. I realized, “Why am I coding this up for every page? Can we hide this behind a service?”

Good old object-oriented programming to the rescue. Since all API requests were made through my chord-service.ts, I changed that class’s behavior to be cache-aware and offline-aware. The following diagram explains the change:

Sorry for the poor man’s diagram, but you get the picture. I made chord-service.ts call a ChordBackend interface. That interface has 2 implementations: one that hits our IndexedDB cache and another that hits our API. The former is used when we’re offline, the latter when we’re online.

This way, we don’t have to update any of our pages. The pages just talk to chord-service.ts like usual. Yay for polymorphism.
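A minimal sketch of that shape (the method names here are illustrative, not the real chord-service.ts API):

```typescript
// One interface, two implementations: the pages only ever see ChordService.
interface ChordBackend {
  getByArtist(artist: string): Promise<string[]>;
}

class ApiBackend implements ChordBackend {
  async getByArtist(artist: string): Promise<string[]> {
    // The real implementation would hit the HTTP API.
    return [`${artist} (from API)`];
  }
}

class CacheBackend implements ChordBackend {
  async getByArtist(artist: string): Promise<string[]> {
    // The real implementation would query the IndexedDB cache.
    return [`${artist} (from cache)`];
  }
}

class ChordService {
  constructor(private readonly backend: ChordBackend) {}

  getByArtist(artist: string): Promise<string[]> {
    // Delegate to whichever backend was chosen at startup (online vs. offline).
    return this.backend.getByArtist(artist);
  }
}
```

Pick the backend once, based on the detected online status, and everything downstream is none the wiser.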

This means that only chord-service.ts needs to know when we’re offline. How does that work?

navigator.onLine and other lies

My first thought was to use the built-in navigator.onLine API. There are even paired online/offline events to notify you when your online status changes. Perfect!

Except, these don’t really work in practice.

The thing is, “are you online?” isn’t a super easy question to answer. What I found was if my phone had zero bars out in podunk rural Iowa, I wasn’t really online, but navigator.onLine reported true. Gah!

I also saw weird things when testing offline via browser dev tools. I hit F12 -> Network -> Offline. Surely that would put us in offline mode, yes? Nope. Sometimes (not always?) navigator.onLine returned a false positive.

Even putting my iPhone in airplane mode was no guarantee navigator.onLine would give me a correct result. 😔

The documentation for navigator.onLine warns you about some of this:

In Chrome and Safari, if the browser is not able to connect to a local area network (LAN) or a router, it is offline; all other conditions return true. So while you can assume that the browser is offline when it returns a false value, you cannot assume that a true value necessarily means that the browser can access the internet. You could be getting false positives, such as in cases where the computer is running a virtualization software that has virtual ethernet adapters that are always “connected.” Therefore, if you really want to determine the online status of the browser, you should develop additional means for checking.

In Firefox and Internet Explorer, switching the browser to offline mode sends a false value. Until Firefox 41, all other conditions return a true value; testing actual behavior on Nightly 68 on Windows shows that it only looks for LAN connection like Chrome and Safari giving false positives.

MDN for navigator.onLine

“You should develop additional means for checking [online status].” 🙄

Yeah, that’s kinda what I had to do. I built online-detector.ts which basically just makes a no-op call to my API. If it fails, we’re offline.

Do I need to keep this offline status up-to-date?

Nah. For my purposes, we detect once and go from there. You need to reload the app to see a different offline status. That works for me. But if you need something better, you could periodically hit your API and fire an event as needed.
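The detection approach boils down to something like this sketch (the injected ping function is hypothetical; the real online-detector.ts makes a no-op call to my API):

```typescript
// Sketch: ping a cheap API endpoint once; if the call fails, treat the
// session as offline. The ping function is injected for testability.
async function detectOnline(ping: () => Promise<unknown>): Promise<boolean> {
  try {
    await ping();
    return true;
  } catch {
    return false;
  }
}
```

Unlike navigator.onLine, this answers the question that actually matters: “can I reach *my* server right now?”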

Pseudo full-text search with IndexedDB

The last challenge I encountered was full-text search. Now that we have our chord-cache.ts which caches chord charts, I could fetch them by name. But the name had to be exact.

Searching for “King” would not match the chord chart “He is King”. That’s because of the way IndexedDB works. When querying an index, you can query by range or by exact value.

Query by range doesn’t work for my purposes. I could match everything up to “King” or everything after “King”, but not sentences that contain “King”.

Additionally, queries are case-sensitive by default.

To compensate for this, I created some additional indexes that stored all the words in the song title. “He is King” would store “he” and “king”. Kind of a poor man’s case-insensitive full-text search.

When the user queries for “King”, I convert it to lower case, then asynchronously query all my indexes for “king”. I feed all the results into a Set to remove duplicate results. Bingo, we have working(ish) offline search.
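Here’s a rough sketch of that word-index approach, with an in-memory Map standing in for the IndexedDB indexes (function names are illustrative):

```typescript
// Poor man's full-text search: index every lower-cased word of a title,
// query each word of the search, and de-dupe hits with a Set.
function tokenize(text: string): string[] {
  return text.toLowerCase().split(/\s+/).filter(w => w.length > 0);
}

function buildWordIndex(titles: string[]): Map<string, string[]> {
  const index = new Map<string, string[]>();
  for (const title of titles) {
    for (const word of tokenize(title)) {
      if (!index.has(word)) index.set(word, []);
      index.get(word)!.push(title);
    }
  }
  return index;
}

function searchTitles(index: Map<string, string[]>, query: string): string[] {
  const results = new Set<string>(); // Set removes duplicate matches
  for (const word of tokenize(query)) {
    for (const title of index.get(word) ?? []) results.add(title);
  }
  return [...results];
}
```

“He is King” is indexed under “he”, “is”, and “king”, so a query for “King” finds it via an exact (lower-cased) key lookup, which is the kind of lookup IndexedDB is good at.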

HTML template reuse

When I viewed my service worker cache (F12 -> Application -> Cache Storage), I noticed an oddity: every chord chart route (e.g. /ChordSheets/2630) had cached the same HTML template.

That’s because as a Single Page Application (SPA), we use an HTML template for all chord chart detail pages, asynchronously loading in the actual chord chart details.

Not a huge deal, but this means that if I cache 1000 chord charts, I’ll have the exact same HTML template in the service worker cache for each one. Wasteful.

Is there a way to tell our service worker cache, “Hey, if you come across /chords/123, use the same cached result from /chords/678”?

It turns out that yes, this is possible, and it’s quite easy with Google Workbox custom plugins. Specifically, you can pass a function to Google Workbox’s various recipes to tell them which cache keys to use. This lets me use the same cache key for all my chord chart detail pages:

// Page cache recipe:
import {pageCache} from 'workbox-recipes';

pageCache({
  plugins: [{
    // We want to override the cache key for
    //  - Artist page: /artist/Joe%20Artist
    //  - Chord details page: /ChordSheets/2630
    // Reason is, these pages are the same HTML, just different behavior.
    cacheKeyWillBeUsed: async function({request}) {
      const isArtistPage = !!request.url.match(/\/artist\/[^\/]+$/);
      if (isArtistPage) {
        return new URL(request.url).origin + "/artist/_";
      }

      const chordDetailsRegex = new RegExp(/\/ChordSheets\/[\w|\d|-]+$/, "i");
      const isChordDetailsPage = !!request.url.match(chordDetailsRegex);
      if (isChordDetailsPage) {
        return new URL(request.url).origin + "/ChordSheets/_";
      }

      return request.url;
    }
  }]
});

Here we’re using the Google Workbox pageCache recipe, which hits the network and falls back to the cache if the network is too slow to respond.

We pass a custom plugin (really, just a function) to the recipe. It defines a cacheKeyWillBeUsed function, which Workbox uses to determine the cache key. In it, I say, “If we’re navigating to a chord details page, just use ChordSheets/_ as the cache key.”

I do the same for the artist page, for the same reason.

End result is, we avoid hundreds or thousands of duplicates for chord details and artist pages.
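The normalization in that plugin boils down to a pure function, which makes the behavior easy to check: every chord details URL collapses to one key, every artist URL to another, and everything else passes through. A sketch:

```typescript
// Pure-function version of the cache key normalization: any
// /ChordSheets/<id> or /artist/<name> URL maps to a single shared key.
function cacheKeyFor(url: string): string {
  if (/\/artist\/[^\/]+$/.test(url)) {
    return new URL(url).origin + "/artist/_";
  }
  if (/\/ChordSheets\/[\w\d-]+$/i.test(url)) {
    return new URL(url).origin + "/ChordSheets/_";
  }
  return url; // all other pages keep their own cache entry
}
```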


It’s possible to build great offline web apps. For most apps, service worker will suffice.

For my purposes, I needed to go further: adding an IndexedDB for my web app to enable full offline support for dynamic functionality like searching, filtering, and sorting.

iframes pose a difficulty for making your app available offline, as they start a new browsing context that your service worker can’t intercept. If you own the domain you’re iframing, you can still make it work. For apps like mine that iframe content from domains I don’t own (Google Docs in my case), I had to work around the issue by creating screenshots of documents and loading those while offline.

My app doesn’t let users create or update data, so I didn’t have to manage this while offline. But the web platform can handle that, too, via BackgroundSync.

Bottom line: making a PWA work offline is entirely possible. I think it’s amazing I can write software that works online and offline on iOS, Android, Windows, Mac, and VR/AR devices, using a single codebase built on web standards.

Getting Started with RavenDB Cloud

Spin up a RavenDB database quickly and cheaply. Create a highly-available database cluster in minutes. Try out the all-new RavenDB Cloud for free.


RavenDB Cloud is a new database-as-a-service from the creators of RavenDB. No need to download any software or futz with port forwarding or virtual machine management: just visit the RavenDB Cloud site and spin up a RavenDB instance.

RavenDB itself is a distributed database: while it can run as a single server, Raven is designed to work well in a cluster, multiple instances of your database that sync to each other and keep your app running even if a database server goes down. RavenDB Cloud builds on this and makes it super simple to spin up a database cluster to make your app more scalable and resilient.

In this article, I’ll walk you through both. We’ll start by spinning up a basic, single node instance in RavenDB Cloud. Then I’ll show you how to spin up a full cluster. All the while, we’ll be talking to our database from an ASP.NET Core web app. Let’s get started!

Spinning up a (free!) RavenDB Cloud instance

RavenDB Cloud offers a free instance. This is great for testing the waters and doing some dev hacking on Raven. I also use the free instance as my “local” database during development; it’s super easy to spin up an instance in RavenDB Cloud and point my locally running apps at. Let’s do that now.

Head over to the RavenDB Cloud site and click “Get started free”:


You’ll register with your email address, and then you’ll be asked what domain you’d like. This will be the URL through which you’ll access your databases. For this CodeProject article, I decided on a fitting name:


The next step is optional: billing information. If you’re just playing around with the free instance, you can click “skip billing information.” Now we’re presented with the final summary of our information. Click “Sign up” and we’re ready to roll:


Now we’re registered and we’ll receive our sign in link via email:


I’ve now got an email with a magic link that signs me in. Clicking that link takes me to my RavenDB Cloud dashboard:


Here we’ll create a new product: our free RavenDB Cloud instance.

You might wonder: what do we mean by “product” here – is it just a single database? A product here really means a cloud server (or servers) in which one or more databases reside. So, for example, our free instance can have multiple databases inside of it, as we’ll see shortly.

We’ll click “Add Product” and we’re asked what we want to create, with the default being the free instance:


If we change nothing on this screen, we’ll create a free instance, which is perfect for our initial setup.

Before we move on, notice we can create an instance either in Amazon’s or Microsoft’s cloud. We can also choose the region, for example, AWS Canada, or Azure West US:


We can also choose the tier: Free, Development, or Production. For our first example here, we’re going to go with the free instance.


It’s limited to a single node – no highly available cluster – with 10 GB of disk space, running on low-end hardware (2 vCPUs and 0.5 GB RAM). That’s fine for small projects and perfect for testing the waters. We’ll go ahead and choose the free instance and click Next.


Now we can specify the Display Name of the product; this is what we’ll see on our dashboard. Optionally, you can limit access to your database by IP range. Raven databases are secure by default using client certificates – we’ll talk about these more in a moment – so limiting access to an IP range isn’t strictly necessary, but adds an additional layer of security. For now, I’ll leave the IP range opened to the world.

We’ll click Next to see the summary of our RavenDB Cloud product, then click Create.


Once we click Create, I can see the free instance on my dashboard:


Here you can see our free instance spinning up in AWS, with a yellow “Creating” status. After a moment, it will finish spinning up and you’ll see the product go green in the Active state:


Congrats! You just spun up a free RavenDB Cloud instance.

We want to connect to this instance and create some databases. We can do that through code, but with RavenDB, we can also do it through the web using Raven’s built-in tooling, Raven Studio. You’ll notice the URLs section of the instance: that’s the URL through which we can access our database server and create databases.

But wait – isn’t that a security risk? If you try that URL right now in your browser, you’ll be prompted for a security certificate. Where do you get the certificate? RavenDB Cloud generates one for you, and it’s available through the “Download Certificate” button:


Clicking “Download Certificate” will download a zip file containing a .pfx file – the certificate we need to access our database server:


(Yes, I really did pay for a registered copy of WinRAR)

You’ll see 2 .pfx files in there: one with a password, one without. You’re free to use either, but for our purposes, we’re going to use the one without a password. I’ll double-click free.clistctrl.client.certificate.pfx and click Next all the way through until I’m done; no special settings needed.

Once I’ve installed that certificate, I can now securely access my database using the URL listed in the dashboard:


Note: If you tried to access the URL before installing the certificate, you may run into an issue where your browser won’t prompt you for a certificate even after installing it. If that happens, simply restart your browser, or alternately, open the link in a private/incognito browser window.

Going to that URL in Chrome will prompt me to choose a certificate. I’ll choose the one we just installed, free.clistctrl. Hooray! We’re connected to our free RavenDB Cloud instance:


What we’re looking at here is RavenDB’s built-in tooling, Raven Studio. You can think of Raven Studio akin to e.g. SQL Management Studio: it’s where we can create databases, view data in our databases, execute queries, etc.

Our first step is going to be creating a database. I’m going to click Databases -> New database. I’m going to name it Outlaws, which we’ll use to store wonderful mythic outlaws of the wild west.


After clicking, we have a new database up and running in RavenDB Cloud – woohoo!

How does it look to connect to this database from, say, an ASP.NET Core web app? It’s pretty simple, actually. I’m going to do that now, just to show how simple it is.

While RavenDB has official clients for .NET, JVM, Go, Python, Node.js, and C++, I’m most familiar with C# and .NET, and I think Raven shines brightest on .NET Core. So, I’ve created a new ASP.NET Core web app in Visual Studio, then I add the RavenDB.Client NuGet package.

Inside our StartUp.cs, I initialize our connection to Raven:

That’s it! We can now store stuff in our database:

Likewise, we can query for our objects easily:
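The original post showed these snippets as C# screenshots, which are missing here. As a rough, untested sketch of the same flow, using RavenDB’s official Node.js client (the `ravendb` npm package) instead of C#, with a placeholder server URL and certificate setup omitted, it looks something like:

```typescript
// Rough sketch, not the post's original C# code. Assumes the "ravendb"
// npm package; replace the URL with your instance's URL from the dashboard.
import { DocumentStore } from "ravendb";

const store = new DocumentStore("https://your-instance-url", "Outlaws");
store.initialize();

async function demo() {
  const session = store.openSession();

  // Store stuff in our database:
  await session.store({ name: "Billy the Kid" }, "outlaws/1");
  await session.saveChanges();

  // Query for our objects:
  const outlaws = await session
    .query({ collection: "Outlaws" })
    .whereEquals("name", "Billy the Kid")
    .all();
}
```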

Saving and querying is a breeze – if you’re new to Raven, I encourage you to check out the awesome learning materials to help you get started.

One final note here: you can spin up multiple databases inside your RavenDB Cloud product. In this case, we’ve spun up a free instance and created a single Outlaws database inside it, but we can also spin up other databases on this same free server as needed. Since the free tier supports 10 GB of disk space, we can spin up as many databases as will fit inside 10 GB.

Spinning up a cluster in RavenDB Cloud

We just finished setting up a free instance in RavenDB Cloud, created a database, and connected to it, saved and queried some data.

All well and good.

But what happens when your database server goes down – does your app stop working? In our case, suppose AWS or Azure had a major outage, and our free instance goes offline. The result is that our app would stop working; it can’t reach the database.

RavenDB is, at its core, a distributed database: it’s designed to run multiple copies of your database in a cluster. A cluster is 3 or more nodes – database server instances – in which all the databases sync with each other. If one node goes down, the others still work, and your app will automatically switch to one of the online nodes. We call this transparent failover. When the node comes back online, all the changes that happened while it was offline get automatically synced to it.

A wonderful part of all this is you don’t have to do extra work to make it happen – you just set up your database as a cluster, and Raven takes care of the rest. The upside is your app is more resilient to failure: if one of your database nodes goes down, your app keeps working.

Let’s try that now using RavenDB Cloud.

We’ll go back to the RavenDB Cloud portal. We already have our CodeProjectFree product:


Let’s add a new product; we’ll call it CodeProjectCluster. I’ll click Add Product like before, but this time we’re going to specify the Production tier, which will set up our database in a 3-node cluster:


You’ll notice above we set the Tier level to Production – this will set up our database in a cluster. We can tweak the CPU priority, cluster size, and storage size as needed; for this example, we’ll leave these at the smallest sizes.

We’ll click Next and set the instance names as before. Click Finish, and we’re ready to roll: on our dashboard, we now see the cluster being created:


Notice that while our CodeProjectFree instance contains a single URL – there’s only 1 node – our new CodeProjectCluster contains 3 URLs, each one being a node in the cluster. The first node is node A, the second node is node B, and so on, each with its own corresponding URL.

Once the cluster is finished creating, I’ll download and install the certificate as before:


Even though we have 3 nodes, a single certificate works for all of them. Once I’ve downloaded and installed it, I can click on any of the node URLs. Let’s try the 2nd node, Node B. Its URL takes me to Raven Studio for Node B:


Let’s go ahead and create a new database on this node. As before, I’ll click Databases -> New Database, and we’ll call it OldWestHeroes:


Notice we now have a Replication factor of 3. This means our OldWestHeroes database will be replicated – copied and automatically synchronized – across all 3 nodes. Once we click Create, the database will be created and we’ll see it on the node:


But since we’re running in a cluster, this database will also automatically be created on the other nodes. Notice under the database name, we see Node B, Node C, and Node A; Raven Studio is telling us this database is up and ready on all 3 of our nodes.

Click the Manage group button, and we can see a visual description of our cluster topology:


On the right, we can see all 3 nodes are all replicating to each other. (If any nodes were offline, we would see the node here as red with a broken connection.)

This visual tells us our database is working on all 3 nodes in our cluster. It also shows ongoing tasks, such as automatic backups, hanging off the nodes responsible for them. You’ll notice the “Server Wide Backup” task is hanging off Node A – RavenDB Cloud creates this task for us automatically. Database backups are free for up to 1GB, and $1/GB/month beyond that.

We’re looking at Node B right now, but since all 3 nodes in our cluster are up and running, we should see the database on any of the other nodes.

Let’s try it! I’ll head over to Node A. What do we see?


Yep! Our OldWestHeroes database has been automatically created on this node. And because these nodes are automatically synchronized, any changes we make to one node will show up automatically on the other nodes.

Let’s try that out too. Here on Node A, I’m going to click the OldWestHeroes database, then click New Document. I’ll create a new Cowboy document:


I’ll click save and our database now has a single Cowboy in it:


And because we’re in a cluster, all the other nodes will now have this same document in them. Let’s head over to Node C:


Sure enough, our Cowboy document is there. I can edit this Cowboy and change his name, and of course those changes will be synced to all the other nodes.

How does that change our app code? Going back to our C# web app, does our code have to change?

Not much! The code is essentially the same as before, but instead of specifying a single URL, we specify the URLs of all the nodes in our cluster:
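A sketch of what that initialization could look like; the node URLs are placeholders for the three URLs shown in the RavenDB Cloud portal, and clusterCertificate stands for the certificate we downloaded earlier:

```csharp
// One-time setup, e.g. in Startup.cs. Substitute your cluster's three node URLs.
var store = new DocumentStore
{
    Urls = new[]
    {
        "https://a.example-cluster.ravendb.cloud",
        "https://b.example-cluster.ravendb.cloud",
        "https://c.example-cluster.ravendb.cloud"
    },
    Database = "OldWestHeroes",
    Certificate = clusterCertificate // the client certificate from the portal
};
store.Initialize();
```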

This one-time initialization code in our Startup.cs file is the only code that has to change. The rest of the app code doesn’t change; we can still write objects and query them as usual:
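For instance, storing a document might look like this sketch (the Cowboy class is assumed to be a plain POCO):

```csharp
using (var session = store.OpenSession())
{
    // Store and save a document; the client writes to whichever node it's
    // currently talking to, and the cluster replicates it to the others.
    session.Store(new Cowboy { Name = "Wyatt Earp" });
    session.SaveChanges();
}
```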

Ditto for querying:
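A query sketch, under the same assumptions:

```csharp
using (var session = store.OpenSession())
{
    // LINQ queries are unchanged; failover to another node is transparent.
    var earps = session.Query<Cowboy>()
        .Where(c => c.Name.StartsWith("Wyatt"))
        .ToList();
}
```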

The upside for our app: even if some of the nodes in our cluster go down (say, during an Azure outage), our app keeps working, transparently failing over to the other nodes in the cluster. No extra code needed!


In this article, I’ve shown how to quickly spin up a free database in RavenDB Cloud. We showed how it’s secured with a certificate and how we can connect to it from a C# web app. It’s quick and easy and great for testing the waters.

We also looked at something more production-ready: spinning up a 3 node cluster in RavenDB Cloud. We saw how databases in the cluster are automatically kept in sync across all the nodes: any change on one node is quickly replicated to the others. We also looked at the minimal code change (2 additional lines) required to move our web app from the free single-node instance to the 3 node cluster.

Apps running against a cluster are more resilient in the face of failure: your app keeps running even if some of the nodes go down. Raven allows reading and writing to any of the nodes of the cluster, keeping your app running in the face of hardware failure or network issues.

RavenDB Cloud lets you spin up a single RavenDB instance or a full production cluster quickly and easily in the cloud. I hope this article has helped you understand what it is and why you’d use it, and I hope you’ll give it a try today.

Towards Reactive Server Apps: a new hybrid web programming model pioneered by Blazor

Summary: Microsoft recently announced Razor Components (formerly Server-Side Blazor) will be shipping in .NET Core 3. Razor Components offer a new kind of programming model for the web, a blend of SPA and classic POST + Redirect + GET apps.


Reactive Server Apps: A fully-reactive web stack, where changes in the UI are automatically pushed down to the database, and changes in the database are automatically pushed up through to the DOM.

Imagine writing a web app that has virtually zero JavaScript, doesn’t need page reloads, and changes to the database are automatically and instantly reflected in the UI.

The Blazor project moves towards this ideal via its Razor Components (formerly Server-Side Blazor) programming model, what we might call the Reactive Server App model.

Today, most web apps fall into 2 categories:

  1. Classic POST + Redirect + GET (PRGs)
  2. Single Page Apps (SPAs)

POST + Redirect + GET is where you type some data into a web page and hit the submit button (POST); after a few seconds, a new page loads (Redirect) with the updated data (GET). You might call this classic web development.

Ordering tickets online is typically this kind of web app.

Single Page Apps (SPAs) are the thick-client model, only in the browser with JavaScript. You type in a URL, and the app loads. After that, everything seems to happen without page reloads: navigation, saving your data, loading data. This is because it’s all asynchronous; navigation, saving, and UI interaction are done via asynchronous HTTP calls in the background.

Gmail is a Single Page App.

Blazor, the experimental framework intended to run C# in the browser via the new Web Assembly standard, is introducing a new hybrid model of web programming, what we might call Reactive Server Apps.

In v0.5, Blazor introduced Server-Side Blazor apps. Where Blazor proper is a Web Assembly runtime executing C# in the browser (a variant on SPA thick clients), Server-Side Blazor is a new programming model where state lives on the server but is asynchronously pushed to the browser via web sockets.

Blazor went to v0.6 yesterday with a big announcement: Server-Side Blazor apps would be shipping separately from Blazor Web Assembly, shipping earlier (in .NET Core 3, due early 2019), and would be getting a new name: Razor Components.

How does this new web programming model work?

Like classic POST + Redirect + GET (PRG) apps, you write server-side code (in our case, C# and .cshtml files). But unlike classic PRG apps, all the state is automatically shared between the server and the browser via web sockets.

So your app logic and state all exists on the server, but it’s automatically transferred to the browser. You click Save, and it feels like a SPA: things appear to happen instantaneously without POST + Redirect + GET reloads.

And because there’s a live connection between server and browser, apps no longer need to fetch data to display a page; the data can be pushed down to the browser in real time. This is why I call this hybrid model Reactive Server Apps.

If coupled with a database that supports reactive push notifications (e.g. RavenDB), this programming model may gain traction. I envision future web frameworks where the whole stack is reactive:

  1. Web app is reactive, pushing changes from the JavaScript data model to the DOM.
  2. Web server is reactive, pushing changes from the server’s data model down to the browser.
  3. Database is reactive, pushing changes from the database to the web server.

Frameworks like React and Angular already do part #1: changes in the JavaScript model are automatically reflected in the DOM.

But these frameworks don’t do #2; instead they must poll the server for the latest data. And to do that, the server must poll the database.

Razor Components gives us step 2: the data model and state on the server is automatically pushed down to the browser. When the state or data changes, your server-side web app signals the browser and the corresponding browser components get asynchronously updated.

All that’s remaining is #3: a database that pushes changes to your web server. It just so happens we have such databases; I’m partial to RavenDB.
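As a sketch of what that push looks like with RavenDB’s Changes API (names follow the 4.x C# client; treat the details as illustrative):

```csharp
// Subscribe to server-pushed notifications over a persistent connection.
store.Changes()
    .ForDocumentsInCollection<Cowboy>()
    .Subscribe(change =>
    {
        // change.Id and change.Type tell us which document changed and how;
        // a reactive web framework could push the update on to the browser.
        Console.WriteLine($"{change.Id}: {change.Type}");
    });
```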

Imagine a server-side framework that has per-page state: here is the live data for /dashboard. As you’re looking at the page in your browser, changes to the data will appear automatically, because the database pushes changes to the server app, which pushes changes to the browser, which pushes changes to the DOM. The UI updates instantly and the developer didn’t have to do anything for that to happen.

Such a programming model has tangible improvements over both SPAs and PRGs.

Unlike SPAs, Reactive Server Apps would load fast: they’re essentially still thin clients, with no giant JS libraries or runtimes to pull in.

Reactive Server Apps would also be live: programmers don’t have to refetch data from the server to update the UI, and users see immediate UI updates as the data changes, even if someone else changed the data they’re looking at.

And whereas SPAs rely heavily on JavaScript, Reactive Server Apps can use more powerful programming languages to do the heavy lifting, easing development further.

And unlike PRGs, Reactive Server Apps don’t need page refreshes, the same benefit SPAs give. They are also “live”: changes to the database flow to the DOM without having to re-query the database.


At least one downside is scalability: each connected client must hold an open web socket connection to the server. How well does this scale? This StackOverflow question seems to suggest it scales into the hundreds of thousands of concurrent users before having to add more servers.

Whether this model takes off or not is yet to be seen, but the idea is innovative. As a web developer, I’m excited to experiment with it.

I built a PWA and published it in 3 app stores. Here’s what I learned.

Summary: Turning a web app into a Progressive Web App (PWA) and submitting it to 3 app stores requires about a month of work, a few hundred dollars, and lots of red tape.

I recently published Chavah Messianic Radio, a Pandora-like music player, as a Progressive Web App and submitted it to the 3 app stores (Google Play, iOS App Store, Windows Store).




The process was both painful and enlightening. Here’s what I learned.


First, you might wonder, “Why even put your app in the app stores? Just live on the open web!”

The answer, in a nutshell, is because that’s where the users are. We’ve trained a generation of users to find apps in proprietary app stores, not on the free and open web.

For my web app, there were 2 big reasons to get in the app store:

  1. User demand
  2. Web app restrictions imposed by hostile mobile platforms (read: Apple)

User demand: My users have been asking me for years, “Is there an app for Chavah? I don’t see it in the store.”

They ask that because we’ve trained users to look for apps in proprietary app stores.

My response to my users has up until now been,

“Aww, you don’t need an app – just go to the website on your phone! It works!”

But I was kind of lying.

Real web apps only kinda-sorta work on mobile. Which brings me to the 2nd reason: web app restrictions imposed by hostile mobile platforms (read: Apple).

Mobile platform vendors, like Apple, are totally cool with apps that use your phone to its fullest. Access your location, play background audio, get your GPS coordinates, read all your contacts, play videos or audio without app interaction, read your email, intercept your typing, play more than one thing at a time, use your microphone and camera, access your pictures, and more.

Apple’s totally cool with that.

But only if you pay Apple $99/year for the privilege.

If you want to do any of those things in a regular old web app, well, goshdarnit, Apple won’t just deny you these things, it prevents you from even asking permission.

For my Pandora-like music player app, this horrible brokenness showed up in numerous ways.

From minor things like “iOS Safari won’t let you play audio without first interacting with the page” to major, show-stopping things like, “iOS Safari won’t let you play the next song if your app is in the background or if your screen is off.”

Oh, plus weird visual anomalies like typing in a textbox and seeing your text appear elsewhere on screen.

So, to make my HTML5 music app actually functional on people’s mobile devices, I had to turn my PWA into an app in the app stores.

Barriers to entry

In the ideal world, publishing your web app to the app stores would look like this:

Your Web/Cloud Host or CI Provider

You’ve published a Progressive Web App. Publish to app stores?

☑ iOS App Store
☑ Google Play
☑ Windows Store

(Or alternately, as Microsoft is experimenting with, your PWA will just automatically appear in the app store as Bing crawls it.)

But alas, we don’t live in this ideal world. Instead, we have to deal with all kinds of proprietary native BS to get our web apps in the stores.

Each of the app stores has a barrier to entry: how difficult it is to take an existing web app and get it into the app store.

I list some of the barriers below.


  • Apple: $99/year to have your app listed in the iOS app store.
  • Google: One-time $25 fee to list your app in the Google Play Store.
  • Microsoft: Free!

Don’t make me pay you to make my app available to your users. My app enriches your platform. Without good apps, your platform will be abandoned.

Apple used to understand this. When it first introduced the iPhone, Steve Jobs was adamant that HTML5 was the future and that apps would simply be web apps. There was no native iPhone SDK for 3rd parties. Apple has since abandoned this vision.

Google asks for a token one-time $25 fee, probably to deter spammers and keep truly junk apps out of the store.

Microsoft seems determined to just increase the total number of apps in their app store, regardless of quality.

Winner: Microsoft. It’s hard to beat free.

Adding native capabilities

In an ideal world, I wouldn’t have to write a single extra line of code for my web app to integrate into the OS. Or, as Steve Jobs said back in 2007,

“The full Safari engine is inside of iPhone. And so, you can write amazing Web 2.0 and Ajax apps that look exactly and behave exactly like apps on the iPhone. And these apps can integrate perfectly with iPhone services. They can make a call, they can send an email, they can look up a location on Google Maps.”

-Steve Jobs, 2007

For me, that means my web app plays background audio using standard HTML5 audio; that works just fine on all OSes.

My web app declares what audio is playing, and the OSes pick up on that, showing the currently playing song info on the lock screen.

My app controls audio using the standard HTML5 audio API; the OS picks up on that and provides play/pause/next/volume/trackbar controls on the lock screen.

But sadly, we don’t live in this ideal world. All the things listed above don’t actually work out of the box on all 3 platforms.

My web app needs to play audio in the background. And load URLs from my CDN. Sounds reasonable, right? And bonus, how about showing currently playing song info on the lock screen? And controlling the audio (play/pause/next, etc.) from the lock screen? How hard is this?

The three platforms take very different approaches here:

  • Apple: We don’t give web apps a way to declare such capabilities; you’ll need to write a native wrapper (e.g. with Cordova) to interact with the OS.
  • Google: Web FTW! Let’s create a new web standard that shows audio & controls from the lock screen. Background audio? Sure, go ahead!
  • Microsoft: We’ll inject our proprietary API, window.Windows.*, into your JavaScript global namespace and you can use that to do the things you want to do.

Going into more details here for each store:

For iOS app store, does your web app need to play background audio? Use a Cordova plugin. Need to show currently playing song on the lock screen? Use a Cordova plugin. Need to control the currently playing song from the lock screen? Use a Cordova plugin. You get the idea. Basically, Cordova tricks Apple into thinking you’re a native app. And since you’re not a yucky web app, Apple lets you do all the things native apps can do. You just need native tricks – Cordova plugins – to let you do it.

For Google Play, it’s nice that I can just write JS code to make this work; no Cordova plugins required here. Of course, that JS won’t work anywhere except Chrome on Android…but hey, maybe one day (in an ideal world!) all the mobile browsers will implement these web APIs…and the world will live as one. I’m almost ready to bust out some John Lennon hippie utopia tunes.
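To give a flavor of that JS, here’s a sketch of the Media Session API as Chrome on Android supports it; buildMediaMetadata and publishNowPlaying are hypothetical helper names, not part of any standard:

```javascript
// Build the plain metadata object the Media Session API expects.
function buildMediaMetadata(song) {
  return {
    title: song.title,
    artist: song.artist,
    album: song.album,
    artwork: [{ src: song.artUrl, sizes: "512x512", type: "image/jpeg" }],
  };
}

// Publish the currently playing song to the OS lock screen.
function publishNowPlaying(song, controls) {
  // Feature-detect: on browsers without the API this is a safe no-op.
  if (typeof navigator === "undefined" || !("mediaSession" in navigator)) {
    return;
  }
  navigator.mediaSession.metadata = new MediaMetadata(buildMediaMetadata(song));
  navigator.mediaSession.setActionHandler("play", controls.play);
  navigator.mediaSession.setActionHandler("pause", controls.pause);
}
```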

For Windows Store, do you want to play background audio? Sorry! That is, unless you declare your intentions in our proprietary capabilities manifest file (easy) AND you implement this proprietary media interface using window.Windows.SystemMediaTransportControls (not so easy). Otherwise we’ll mute you when your app goes to the background.
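A sketch of what talking to that injected WinRT JS projection might look like; the guard makes it a no-op everywhere except inside a Windows Store hosted web app, and the exact member names should be checked against the Windows docs:

```javascript
// Wire the lock-screen transport controls via window.Windows.* (WinRT projection).
function enableLockScreenControls(onPlay, onPause) {
  if (typeof Windows === "undefined" || !Windows.Media) {
    return false; // not hosted in a Windows Store app
  }
  var smtc = Windows.Media.SystemMediaTransportControls.getForCurrentView();
  smtc.isPlayEnabled = true;
  smtc.isPauseEnabled = true;
  smtc.addEventListener("buttonpressed", function (e) {
    var button = Windows.Media.SystemMediaTransportControlsButton;
    if (e.button === button.play) { onPlay(); }
    if (e.button === button.pause) { onPause(); }
  });
  return true;
}
```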

Winner: Google. I want to be able to just write JavaScript, and let the OS pick up cues from my app.

Runner-up: Windows. I can still write plain old JavaScript, but I need to talk to a proprietary Windows JS API that was injected into my process when running on Windows. Not terrible.

Loser: Apple. They don’t care about web apps. Actually, it’s worse than that. It feels like they are actually hostile to web apps. iOS Safari is the new Internet Explorer 6. It has lagged behind in nearly every web standard, especially around Progressive Web Apps. This is probably for business reasons: web apps disrupt their $99/year + 33% in-app purchases racket. So to make my web app work on their platform, I have to basically pretend I’m a native app.

App Store Registration

Submitting your PWA to the app store requires registration, business verification, and more red tape. Here’s how the 3 app stores fared:

  • Apple: You must prove that you’re a legal, registered business. This verification isn’t done by Apple itself, but by a 3rd party, which may or may not know about your business.
  • Google: You want your app in our store? Cool by us.
  • Microsoft: You want your app in our store? Cool by us.

The biggest pain point for me was getting verified as a legal business by Apple.

First, I went to the site and registered for Apple’s Developer Program. I filled out my name and company information. (Aside: I guess Apple won’t let you submit an app unless you have a registered, legal company?)

I click next.

“The information you entered did not match your D&B profile.”


A bit of Googling showed that “D&B profile” means Dun & Bradstreet. I’d never heard of this group before, but it turns out Apple uses them to verify your legal corporation details.

And apparently, my D&B profile didn’t match what I put in my Apple Dev registration.

I google some more and find the Apple dev forums littered with similar posts. Nobody had a good answer.

I contact Apple Dev support. 24 hours later, I’m contacted by email saying that I should contact D&B.


I decide to contact them…but Apple says it will take up to a few days for them to respond.

At this point, I’m thinking of abandoning the whole idea.

While waiting for D&B support to get back to me, I decide to go to the D&B site, verify my identity, and update my company information which, I assume, they had taken from government registration records.

Did I mention how sucky this is? I just want to list my existing web app in the store. Plz help.

I go to D&B to update my business profile. Surprise! They have a JavaScript bug in their validation logic that prevents me from updating my profile.

Thankfully, I’m a proficient developer. I put a breakpoint in their JavaScript, click submit, change the isValid flag to true, and voilà! I’ve updated my D&B profile.

Back to Apple Dev –> let’s try this again. Register my company…

“Error: The information you entered did not match your D&B profile.”


Talk to Apple again. “Oh, it may take 24-48 hours for the updated D&B information to get into our system.”

You know, because digital information can take 2 days to travel from server A to server B. Sigh.

Two days later, I try to register…finally it works! Now I’m in the Apple Developer program and can submit apps for review.

Winner: Google and Microsoft; both took all of 5 minutes to register.

Loser: The Apple Developer registration was slow and painful. It took about a week to actually get registered with their developer program. It required me to contact support at 2 different freaking companies. And it required me to runtime-debug the JavaScript on a 3rd party website just to get past their buggy client-side validation, so my info would flow to Apple, so I could submit my app to the store. Wow, just…wow.

If there is any saving grace here for Apple, it’s that they have a 501c3 non-profit program, where non-profits can have their $99 annual fee waived. I took advantage of that. And perhaps this extra step complicated matters.

App Packaging, Building, Submitting

Once you have a web app, you have to run some magic on it to turn it into something you can submit for App Store review.

  • Apple: First, buy a Mac; you can’t build an iOS app without a Mac. Install XCode and these build tools and frameworks, acquire a certificate from our developer program, create a profile on a separate website called iTunes Connect, link it up with the certificate you generated on the Apple Dev center, then submit using XCode. Easy as one, two, three…thirty-seven…
  • Google: Download Android Studio, generate a security certificate through it, then package it using the Studio. Upload the package to Android Developer website.
  • Microsoft: Generate an .appx package using these free command line tools, or Visual Studio. Upload to the Microsoft Dev Center website.

The good news is, there’s a free tool to do the magic of turning your web app into app packages. That awesome free tool is called PWABuilder. It analyzes a URL, tells you what you need to do (e.g. maybe add some home screen icons to your PWA web manifest). And in a 3 step wizard, it lets you download packages that contain all the magic:

  • For Windows, it actually generates the .appx package. You can literally take that and submit it on the Windows Dev Center site.
  • For Google, it generates a wrapper Java app that contains your PWA web app. From Android Studio, you build this project, which generates the Android package that can be uploaded to the Android Dev Center site.
  • For Apple, it generates an XCode project which can be built with XCode. Which requires a Mac.

Once again, Apple was the most painful of all of these. I don’t have a Mac. But you cannot build the XCode project for your PWA without a Mac.

I don’t want to pay several thousand dollars to publish my free app in Apple’s app store. I don’t want to pay for the privilege of enriching Apple’s iOS platform.

Thankfully, MacInCloud costs about $25/month, and they give you a Mac machine with XCode already installed. You can remote into it using Windows Remote Desktop, or even via a web interface.

It wasn’t enough to just build the XCode project and submit. I had to generate a security certificate on the Apple Developer site, then create a new app profile in a separate site, iTunes Connect, where you actually submit the package.

And that wasn’t all: since Apple is hostile to web apps, I had to install some special frameworks and add Cordova plugins that allow my app to do things like to play audio in the background, add the current song to the lock screen, control the song volume and play status from the lock screen, and more.

This took at least a week of finagling to get my app into a working state before I could submit it to the app store.

Winner: Microsoft. Imagine this: you can go to a website that generates an app package for your web app. And if that’s not your thing, you can download command line tools that will do the job. GUI person? The free Visual Studio will work.

Runner-up: Google. Requires Android Studio, but it’s free, runs everywhere, and was simple enough.

Loser: Apple. I shouldn’t have to buy a proprietary computer – a several thousand dollar Mac – in order to build my app. The Apple Dev Center –> iTunes Connect tangling seems like an out-of-touch manager’s attempt to push iTunes onto developers. It should simply be part of the Apple Developer Center site.

App Testing

Once you’ve finally done all the magic incantations to turn your existing web app into a mobile app package, you probably want to send it to testers before releasing it to the unwashed masses.

  • Apple: For testing, you have your testers install Test Flight on their iOS device. Then you add the tester’s email in iTunes Connect. The tester will get a notification and can test your app before it’s available in the app store.
  • Google: In Android Dev Center, you add email addresses of testers. Once added, they can see your alpha/beta version in the App Store.
  • Microsoft: I didn’t actually use this, so I won’t comment on it.

Winner: Toss up. Apple’s Test Flight app is simple and streamlined. You can control alpha/beta expiration simply on the admin side. Google wasn’t far behind; it was quite painless, not even requiring a separate app.

App Review

Once your app is ready for prime time, you submit it for review. The review is done using both a programmatic checklist (e.g. do you have a launch icon?) and by real people (“your app is a clone of X, we reject it”).

  • Apple: Prior to submission, XCode warns you about potential problems during build. The human app review takes about 24-48 hours.
  • Google: Anybody home? Android Studio didn’t tell me about any potential problems, and my app was approved within minutes of submission. I don’t think a real human being looked at my app.
  • Microsoft: Upon submission, a fast programmatic review caught an issue pertaining to wrong icon formats. After passing, a human reviewed my app within 4 days.

Winner: Apple.

Sure, as a developer, I like the fact that my app was instantly in the Google Play store. But that’s only because, I suspect, it wasn’t actually reviewed by a human.

Apple had the quickest turnaround time for actual human review. Updates also passed review within 24 hours.

Microsoft was hit or miss here. The initial review took 3 or 4 days. A later update took 24 hours. Then another update, where I added the Xbox platform, took another 3-4 days.


It’s painful, and costs money, to take an existing PWA, get it functional on mobile platforms, and get it listed in the app stores.

Winner: Google. They made it the easiest to get into the app store, and the easiest to integrate with the native platform, by working to standardize web APIs that OS platforms can pick up on (hello, lovely navigator.mediaSession).

Runner-up: Microsoft. They made it the easiest to sprinkle your web app with magic, turning it into a package that can be submitted to their store. (Can be done for free using the PWABuilder site!) Integrating with their platform means using the auto-injected window.Windows.* JavaScript namespace. Not bad.

Loser: Apple. Don’t require me to buy a Mac to build an iOS app. Don’t force me to use native wrappers to integrate with your platform. Don’t require me to screw around with security certificates; let your build tools make them for me, and store them automatically in my Dev Center account. Don’t make me use 2 different sites: Apple Dev Center and iTunes Connect.

Final thoughts: The web always wins. It defeated Flash. It killed Silverlight. It destroyed native apps on desktop. The browser is the rich client platform. The OS is merely a browser-launcher and hardware-communicator.

The web will win, too, on mobile. Developers don’t want to build 3 separate apps for the major platforms. Companies don’t want to pay for development of 3 apps.

The answer to all this is the web. We can build rich web apps – Progressive Web Apps – and package them for all the app stores.

Apple in particular has a perverse incentive to stop the progress of the web. It’s the same incentive that Microsoft had in the late ‘90s and early 2000s: it wants to be the platform for good apps. PWAs undermine that; they run everywhere.

My software wisdom is this: PWAs will eventually win and overtake native mobile apps. In 5-10 years, native iOS apps will be as common as Win32 C apps. Apple will go kicking and screaming, keeping iOS Safari behind the curve and blocking PWA progress where they can. (Even their recent “support” for PWAs in iOS Safari 11.1 actually cripples PWAs.)

My suggestion to mobile app platforms: embrace the inevitable and either automatically add quality PWAs to your app store, or allow developers to easily (e.g. free, and with 3 clicks or less) submit a PWA to your store.

Readers, I hope this has been a helpful glance at PWAs in the app stores in 2018.

Have you submitted a PWA to an app store? I’d love to hear your experience in the comments.

RavenDB.Identity – an ASP.NET Core Identity provider for RavenDB

I’ve recently been doing some greenfield development with ASP.NET Core; I feel the framework, now in version 2, is sufficiently stabilized to warrant building new work on it.

As my database of choice is RavenDB, I want to store my users, logins, and claims/roles in the database. For that, I’ve built the NuGet package RavenDB.Identity. While there are some existing packages for doing Identity with RavenDB, I designed my package to be easy to use and get-out-of-the-way:
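Registration looks roughly like this sketch; the method and type names are from memory of the package’s README, so check the repo for the exact, current API:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // AppUser derives from Raven.Identity.IdentityUser.
    // Assumes a RavenDB IDocumentStore/session is already registered.
    services.AddRavenDbIdentity<AppUser>();
}
```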

Nice and simple, eh?

Once it’s set up, you use [Authorize] just like normal:
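For example, a sketch of a controller locked down with the standard attribute:

```csharp
[Authorize] // only authenticated users (stored in RavenDB) get in
public class ManageController : Controller
{
    public IActionResult Index() => View();
}
```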

Likewise, signing in uses the built-in Identity APIs:
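A sign-in sketch using the stock SignInManager; LoginViewModel is a hypothetical view model:

```csharp
public async Task<IActionResult> Login(LoginViewModel model)
{
    // Standard ASP.NET Core Identity call; RavenDB.Identity supplies the
    // user store underneath.
    var result = await signInManager.PasswordSignInAsync(
        model.Email, model.Password, model.RememberMe, lockoutOnFailure: false);
    return result.Succeeded
        ? RedirectToAction("Index", "Home")
        : View(model);
}
```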

More details, and a sample project, over at the RavenDB.Identity GitHub repo. Enjoy!

Making TypeScript async/await play nice with AngularJS 1.x, even on old ES5 browsers

Summary: How to use TypeScript async/await with AngularJS 1.x apps, compiling down to ES5 browsers.

With TypeScript 2.1+, you can start using the awesome new async/await functionality today, even if your users are running old browsers. TypeScript will compile it down to something all browsers can run.

I’m using Angular 1.x for many of my apps, and I wanted to use the sexy new async/await functionality in my Angular code. I didn’t find any examples online of how to do this, so I did some experimenting and figured it out.

For the uninitiated, async/await is a big improvement on writing clean async code:
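A before/after sketch with hypothetical names; fetchUser stands in for an HTTP call:

```typescript
interface User { name: string; }

// Stands in for an HTTP call.
function fetchUser(id: number): Promise<User> {
  return Promise.resolve({ name: `user-${id}` });
}

// Before: logic is buried in .then() callbacks.
function greetWithThen(id: number): Promise<string> {
  return fetchUser(id).then((user) => `Hello, ${user.name}!`);
}

// After: reads top-to-bottom, with ordinary try/catch for errors.
async function greet(id: number): Promise<string> {
  const user = await fetchUser(id);
  return `Hello, ${user.name}!`;
}
```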

Getting this to work with Angular is pretty simple, requiring only a single step.

1. Use $q for Promise

Since older browsers may not have a global Promise object, we need to drop in a polyfill. Fortunately, we can just use Angular’s $q object as the Promise, as it’s compatible with the Promises/A+ standard.
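A sketch of the polyfill step; installPromisePolyfill is a hypothetical helper name, and in an Angular app you’d call it from a run block, e.g. `angular.module("app").run(["$q", installPromisePolyfill])`:

```typescript
// Point the global Promise slot at Angular's $q so TypeScript's compiled
// async/await helpers use it, and so resolutions run the digest cycle.
function installPromisePolyfill(q: PromiseConstructorLike): void {
  // globalThis stands in for window so this sketch also runs outside a browser.
  (globalThis as any).Promise = q;
}
```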

This kills two birds with one stone: we now have a Promise polyfill, and when these promises resolve, the scope will automatically be applied.

2. You’re done! Sort of…

That’s actually enough to start using async/await against Promise-based code, such as ng.IPromise<T>:
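For instance, a sketch (hypothetical names) awaiting an Angular-style promise; ng.IPromise<T> is structurally a PromiseLike<T>, which is all await needs:

```typescript
// Stands in for something like $http.get(...).then(r => r.data).
function getUserName(): PromiseLike<string> {
  return Promise.resolve("Rodriguez");
}

async function greetUser(): Promise<string> {
  const name = await getUserName(); // await accepts any thenable
  return `Hello, ${name}`;
}
```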

Cool. We’re cooking with gas. Except…

Making it cleaner.

If you look at the transpiled JavaScript, you’ll see that TypeScript is generating 2 big helper functions at the top of every file that uses an async function:

Yikes! Sure, this is how the TypeScript compiler is working its magic: simulating async/await on old platforms going back to IE8 (and earlier?).

Love the magic, but hate the duplication; we’re generating this magic for every TS file that uses async functions. Ideally, we’d just generate the magic once, and have all our async functions reuse it.

We can do just that, explained in steps 3 and 4 below.

3. Use noEmitHelpers TS compiler flag

The TypeScript 2.1+ compiler supports the noEmitHelpers flag. This instructs TypeScript not to emit any of its helpers: not for async, not for generators, not for class inheritance, …nuttin’.

Let’s start with that. In my tsconfig.json file, I add the flag:
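A minimal sketch of the relevant bits (your other compiler options will vary):

```json
{
  "compilerOptions": {
    "target": "es5",
    "noEmitHelpers": true
  }
}
```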

Now if we compile our app with noEmitHelpers set to true, you’ll notice the transpiled UsersController.js (and any other files that use async functions) no longer contains all the magic transpiler stuff. Instead, your async functions compile down to something like this:

Ok – that actually looks fairly clean. Except if you run it, you’ll get an error saying __awaiter is undefined. And that’s because we just told TypeScript to skip generating the __awaiter helper function.

Instead of having TypeScript compiler generate that in each file, we’re just going to define those magic helper functions once.

4. Use TsLib.js to define the magic helper functions once.

Microsoft maintains tslib, the runtime helpers library for TypeScript apps. It’s all contained in tslib.js, a single small file (about 200 lines of JS) that defines all the helper functions TypeScript can emit. I added this file to my project, and now all my async calls work again.

Alternately, you can tell the TypeScript compiler to do that for you using the importHelpers flag.
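A sketch of that option (importHelpers makes tsc import the helpers from the tslib package rather than inlining them, so it assumes you’re compiling to modules):

```json
{
  "compilerOptions": {
    "target": "es5",
    "importHelpers": true
  }
}
```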

It’s almost 2017, and HTML5 audio is still broken on iOS

Summary: Back in 2012, I wrote that HTML5 audio is broken on iOS. Now as we enter 2017, it turns out things are still horribly broken. It is currently impossible to build a music player web app that works on iOS.

Update June 2017: I’ve filed a bug with the iOS WebKit team to address the major blocking issue. Here’s to hoping they fix it!

Last week I published a big update to Chavah Messianic Jewish radio. It’s an HTML5 music player in the vein of Pandora (users can thumb up songs, influencing what gets played) for the Messianic Jewish faith.


And thanks to the magic of the web and HTML5 audio, it works flawlessly on PCs, Macs, and Linux. Sweet!

What about mobile? Well, yeah. Umm. Apple’s mobile implementation of HTML5 audio is still busted.

After releasing the new version last week, my users reported things still busted on iOS devices:


Well, I did have a look. And what I found is that iOS still cripples web apps that use HTML5 audio.

I was hoping that the new (July 2016) relaxed web media restrictions in iOS 10+ would un-cripple HTML5 audio in iOS.

I was disappointed to find it’s still broken. It’s currently impossible to write a working audio player using modern web technologies.

Here are the working things:

  1. You can play MP3 audio* ** *** ****

* Only after user interaction
** Only while the page is active and in the foreground
*** Only while the phone screen is on
**** There’s no way to keep the screen from turning off, so your audio will stop after the first song.

So yes, you can play audio (10 asterisks here). Or more precisely, you can play a single audio track, but not much more.

Here are the busted things:

  1. Minor: Audio doesn’t play until user interaction.
  2. Major: Audio can’t play the next track when the page is in the background. (JavaScript execution is suspended; no way to set audio.src for the next song.)
  3. Major, blocking: Audio can’t play the next track when the phone screen is off.
    (JavaScript execution is suspended when the phone screen is off; no way to set audio.src for the next song)

Details on each of these below:

Audio doesn’t play until user interaction

This is the most minor of the busted things. But it’s an artificial restriction by Apple, likely for user experience and battery life reasons.

None of the other operating systems, mobile or desktop, do this. So we have to have special handling for iOS to require the user to interact with the UI before playing.

As for battery life: 3D content, muted <video>, GIFs, ads, and more don’t require interaction before they start. Why hurt real web apps and real users by singling out audio?

Audio can’t play next track in background

Common user scenario: they go to my web app, hit play, and the music starts streaming in. Now they switch over to Twitter. When the current song ends, the music just stops. No new track is played.


Upon investigation, I found that the HTMLAudioElement’s ended event never fires. Why? Because Apple suspends all JavaScript execution in the name of performance.

This sounds good in theory: you’re not using a web site, so Safari will just stop executing any JavaScript since you’re not using it anyways.

But in practice, this kills real web apps on iOS. Music playing apps need to know when the song is finished in order to play the next song. So, we call

audio.addEventListener("ended", playNextSong)

But the playNextSong function never fires; JavaScript has been suspended, and users are disappointed.
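The whole next-track flow, sketched (the playlist contents are hypothetical, and the audio parameter is structurally typed so the sketch stays self-contained):

```typescript
const playlist = ["song1.mp3", "song2.mp3", "song3.mp3"];
let currentIndex = 0;

// Advance to the next track. On iOS this never runs in the background:
// JS is suspended, "ended" never fires, and src is never updated.
function playNextSong(audio: { src: string; play(): void }): void {
    currentIndex = (currentIndex + 1) % playlist.length;
    audio.src = playlist[currentIndex];
    audio.play();
}

// Wiring it up against a real element would look like:
// const audio = document.querySelector("audio")!;
// audio.addEventListener("ended", () => playNextSong(audio));
```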

Audio can’t play the next track when the phone screen is off.

The most common scenario for my app: user goes to my web app, the app starts playing music. Then, the user leaves her phone alone; perhaps she’s in the car driving while listening to the music.

After a short period of time, the phone screen turns off. The music keeps playing…until the current song ends. Once again, iOS has suspended JavaScript execution, resulting in the audio.ended event never firing, meaning I can’t set audio.src to the next song.

Upon further investigation, I tried to find out if it’s possible to prevent the screen from sleeping.

For a native iOS app, you can set application.idleTimerDisabled = YES. Super simple.

But for a web app? Nope, there’s no supported way to tell iOS, “Hey, keep the screen on, the user is in the middle of listening to music so don’t disrupt them.”

Some dated information on StackOverflow suggests looping a silent audio or video may prevent sleep. I built a little test app to try this out, and it appears to no longer work on iOS 10.

Additional answers on StackOverflow suggest pseudo page navigation every few seconds to prevent phone sleep/lock. I tried this as well, and it likewise appears to no longer work on iOS 10. The phone still sleeps even with page navigation going on.

Bottom line: there appears to be no way for a web app to prevent an iPhone from sleeping/locking.

And since sleeping/locking will cause a suspension of JavaScript execution, there’s no way to play the next song. End result is your audio web app stops playing audio, making it pretty useless.

Apple Webkit team action items

Here’s what we web developers need to make audio web apps a first class citizen on iOS:

  • Don’t suspend JavaScript execution for web apps playing audio. Don’t suspend JS execution if we’re in the background. Don’t suspend JS if the phone is locked. The user is playing our audio for a reason, don’t disrupt the user.

It’s not enough to let the audio finish and then suspend JS; this breaks the user experience and causes audio web apps to stop working.

This would solve the other problems.

Alternately, don’t sleep/lock the phone if an active web app is playing audio. While this alternate solution doesn’t fix all the problems, it would address the most blocking use case for using audio web apps unattended.

Enabling TypeScript 2.0 strict null checks in a Visual Studio project

TypeScript 2.0 beta was released today. Among the big list of 2.0 awesomeness, the headline feature is non-null types.

Non-null types are currently opt-in: you pass a --strictNullChecks flag to the compiler to enable them.

Enabling this feature in a Visual Studio project wasn’t obvious to me. This post shows how to do that.

Once you’ve downloaded the TypeScript 2.0 beta for VS 2015, open your project in Visual Studio. It will prompt you to upgrade the project to the new TypeScript tooling.

Once you’ve done this, open your .csproj in a text editor. Scroll down and find the TypeScript property group:

You’ll want to add a line enabling strict null checks inside that property group.
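Concretely, the line looks something like this (a sketch; the property name assumes the TypeScript&lt;FlagName&gt; convention the MSBuild tooling uses):

```xml
<TypeScriptStrictNullChecks>true</TypeScriptStrictNullChecks>
```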

Save the .csproj, reload it in Visual Studio, and the feature will be enabled.

Bonus: this is also how you’d enable the new 2.0 compiler flags, noUnusedParameters and noUnusedLocals:
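Following the same pattern (again a sketch, assuming the tooling’s usual TypeScript&lt;FlagName&gt; naming):

```xml
<TypeScriptNoUnusedParameters>true</TypeScriptNoUnusedParameters>
<TypeScriptNoUnusedLocals>true</TypeScriptNoUnusedLocals>
```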