Web apps get a bad rap. They are sometimes slower than their native counterparts. They feel out-of-place if their UI varies greatly from the native platform.
But web apps also have things native apps are missing. Here are some of them.
I can find text. In any web app, I can press CTRL+F to find text on the page. I use this dozens of times daily. When I’m using a native app, I have to resort to scanning text manually.
I can log in with one click. I use a password manager to keep track of my logins across thousands of sites. When I have to use a native app like Disney+ and I need to log in, I don’t know my password, and password managers generally don’t work with native apps (especially on desktop, but sometimes on mobile too). I have to launch my browser, open my password manager in the browser, and copy/paste my credentials.
I can select text. I often use text selection as a reading aid. I also use it to grab snippets of text, repost a quote, share it. With native apps, I can’t do this.
I can fill out forms automatically. Does that app need your name, address, phone, email, and more? With native apps, I have to type all that. With web apps, my browser or password manager can do it automatically for me.
I can share app content. You see something in the app and you want to share it. On the web, you can just send the link to friends. (Or even better, link to individual elements on the page, or even link to a section of text.) If it’s an image or video, you can right-click and grab the link to it, save it to disk, or send it to another app. But if it’s a native app, I can’t do those things.
I can pay for things without typing my credit card details. On the web, when I go to pay for something, the browser or password manager can fill out my credit card details with a single click. On native, I have to find my card and physically type the name, type of card, card number, expiration date, and CVV.
I can open another part of the app without leaving my context. You’re deep in an app. Maybe you browsed for movies, navigated to the 8th section, and horizontally scrolled until you found the one you’re looking for. Before you hit play, you want to quickly check the name of that other movie you watched. You could click Recently Watched…but then you’d lose your current context and have to do it all over again. Unless it’s a web app, in which case you can just Ctrl+click/middle-click to open Recently Watched in a new tab while preserving your context in the current tab. Native apps don’t do this, forcing you to lose your context.
I can get to the content quickly. For all the talk about native performance, native apps often load slower than web apps. One reason may be the inefficiencies of the higher abstractions used in native development. But the web has something native does not: multiple billion-dollar companies competing to make it fast. Apple, Google, Microsoft, Mozilla, Samsung and others are investing heavily in making the web fast. The browser wars are survival of the fittest, and the resulting competition benefits end users. The same cannot be said of any native app framework, desktop or mobile.
I can block ads. For years I’ve used the Twitter web app both on mobile and desktop; just go to Twitter.com. The Twitter web app has some problems. Once I thought I’d try the Twitter native app. Oooh, the scrolling seemed smoother. Oooh, I didn’t have the weird bug where I open an image, pinch-to-zoom, and accidentally refresh my feed. Nice. Except…ads. Ads everywhere. I hadn’t realized I was missing them because I had been using the web app, which lets me block ads. Increasingly, developers will publish a native version of their app to let them push more ads in front of more eyeballs. With native apps, I can’t block ads or tracking scripts.
I can scale text and media. Text too small? Need to zoom in on that image? Ctrl + Plus. Web apps let me do this, native apps don’t. Closest I can get on native is the OS-level zoom (e.g. Win+Plus) to get a closeup on the area near the cursor, which doesn’t often suit the task at hand.
I can keep using the app even if it’s busy.
Or, “dog.exe has stopped responding”. Web apps have simpler threading models than native apps and this makes for UIs that tend to be responsive. On the web, when you need to do blocking work like network calls, it’s usually async by default (e.g. fetch(…), import(…), etc.). No need to schedule completion work on another thread; that’s built in. In native land, many developers just do the work on the UI thread, leading to unresponsive apps. Still others will try to coordinate their own threading, which can result in deadlocks, race conditions, or memory errors. While these are possible on the web, they’re much bigger footguns in the native world.
I can keep working even if something goes wrong. An unhandled exception occurred when you clicked a button? The native app may just crash, losing your work in the process. “Better die and start over than continue in an unknown state”, is the idealistic advice. On the web, that unhandled exception shows up in the developer console, the web app just keeps running and your work is preserved. This is the pragmatic outlook baked into the web itself: even malformed HTML documents still render successfully.
These are a few off the top of my head. Add any more in the comments, I’ll add them to the post.
Spent the last 4 days making a PWA offline-capable.
Tricky, as it’s a viewer of cloud documents. (Guitar chord charts)
Workbox recipes, custom Workbox plugins, IndexDB to mirror backend API, pseudo full text search via IDB indexes, phew!
Learned a lot! Blog forthcoming.
Judah Gabriel 🇮🇱 (@JudahGabriel) June 15, 2022
You can build web apps (really, fancy websites) that work offline. When you build one of these things, you can put your device into airplane mode, open your browser and navigate to your URL, and it’ll just work. Pretty cool!
This is the promise of Progressive Web Apps (PWAs): you can build web apps that work like native apps, including ability to run offline.
I’ve built simple offline PWAs in the past, and things worked fairly well.
But this week I needed to do something trickier.
I run MessianicChords.com, a guitar chord chart site for Messianic Jewish music, and I needed to make it work offline. I would soon be traveling to a Messianic music festival where there’s little to no internet connection, and, as a guitar player myself, I wanted to bring up MessianicChords and access the chord charts even while offline.
So I figured, let’s make MessianicChords work entirely offline. Fun!
But this was trickier and a real test of the web platform’s offline capabilities:
Lots of content. My site has thousands of chord charts, totaling hundreds of MB. I can’t just cache everything all at once.
iframes don’t work with service worker caching. Chord charts are .docx and .pdf documents hosted on Google Drive (example) and rendered via iframe. Service worker caching doesn’t work here because iframes start a new browsing context separate from your service worker.
Search and filtering. My guitar chord site lets users search for chord charts by name, artist, or lyrics, and lets users filter by newest or by artist. How can we do this while offline? Service worker cache is insufficient here.
HTML templates reused across URLs. My site is a single page app (SPA), where an HTML template (say, ChordDetails.html) is reused across many URLs (/chords/1, /chords/2, etc.) How can we tell service worker to use a single cached resource across different URLs?
These are the challenges I encountered. I solved them (mostly!), and that’s what this post is about. If you’re interested in building offline-capable web apps, you’ll learn something from this post.
The Goal
Since there are thousands of chord charts — several hundred MB worth of data — I don’t want to cache everything all at once.
Rather, my goal is to make the web app available offline by caching all static assets, then cache any chord charts viewed while online.
Put another way, any chord chart viewed while online becomes available offline.
Making the web app load offline
This is the easy part. You add a service worker to your site, and configure your service worker to cache HTML, JS, CSS, web fonts, and other static assets.
Most “make your PWA offline-capable” articles on the web cover this — but only this.
However, even this “easy” part is fraught with gotchas. Cache invalidation? Cache expiration? Cache warming? Cache first or network first? Offline fallback? Revision-based URLs? etc.
Having implemented such service workers by hand in the past, I now recommend never doing that. 😂 Instead, use Google’s Workbox recipes in your service worker to handle all this for you.
Workbox recipes are small snippets of code that do common offline- and cache-related behaviors.
import {staticResourceCache} from 'workbox-recipes';
staticResourceCache();
What does staticResourceCache() do? It tells your service worker to respond to requests for static resources (CSS, JS, fonts, etc.) with a stale-while-revalidate caching strategy, so those assets can be quickly served from the cache and silently updated in the background. This means users get an instantaneous response from the cache. Meanwhile, the cached resource is refreshed in the background. Combine this with versioned resources (e.g. /scripts/main-hash123xyz.js) generated by Webpack, Rollup, or another build system, and you’ve got automatic cache invalidation handled for you.
Workbox has a recipe for images (cache-first strategy with built-in expiration and cache pruning), a recipe for HTML pages (network-first with slow load time fallback), and more.
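For reference, here’s roughly how a few of those recipes compose inside a service worker. This is a sketch assuming Workbox v6’s workbox-recipes package (pageCache, staticResourceCache, and imageCache are its actual exports); your bundler setup may differ:

```javascript
// sw.js: a sketch of composing workbox-recipes (assumes Workbox v6)
import { pageCache, staticResourceCache, imageCache } from 'workbox-recipes';

pageCache();           // HTML navigations: network-first, cache fallback when the network is slow
staticResourceCache(); // CSS/JS/workers: stale-while-revalidate
imageCache();          // images: cache-first with expiration and pruning
```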
I use Workbox recipes in my service worker, and this makes my site work offline:
However, if we stopped there, you’d notice that viewing a chord chart still fails:
Well, crap.
We used Google Workbox and set up some recipes – shouldn’t the whole app work offline? Why is loading a chord chart failing?
iframes and service workers
The thousands of chord charts on MessianicChords are authored in .docx and .pdf format. There’s a reason for that: chord charts have special formatting (specifically, careful whitespacing) that needs to be preserved. Otherwise, you get a G chord playing over the wrong word, and now you’ve messed up your song:
Plus, the dozens of folks who contributed chord sheets to the site prefer using these formats. 🤷‍♂️
Maybe in the future we migrate all of them to plain text/HTML; that would make them much easier to support offline. But for now, they use .docx and .pdf.
How do you display .docx and .pdf files on the web without using plugins or extensions?
With Google Docs iframes.
Google Docs does crazy work to render these on the web, no plugins required. (Under the hood, they’re converting these complex docs into raw HTML + CSS while meticulously preserving the formatting.)
So, MessianicChords embeds an iframe to load the .docx or .pdf in Google Docs.
What does that have to do with offline PWAs?
Your service worker can’t cache stuff from iframe loads. Viewing a chord chart on MessianicChords loads an iframe to a chord chart in Google Docs, but the request to this Google Docs URL isn’t cached by our service worker.
Why?
By design, iframes start a new browsing context. That means the service worker on MessianicChords doesn’t (and cannot) control the fetch requests the iframe makes to Google Docs.
End result is, my guitar chords site can’t load chord charts while offline. 😔
There is no magical way around this; it’s a deliberate limitation (feature?) of the web platform.
I considered some wild ideas to work around this. Could I statically cache the HTML and resources of the iframe and serve them back with the chord chart from my own server? No; it turns out Google Docs won’t work if not served from docs.google.com. I tried this and other wild ideas without luck.
I finally settled on something of a low-tech solution: screenshots.
I created a service that would load the Google Doc in the browser, take a screenshot of that, and send that screenshot back with the chord chart. (Thanks, Puppeteer!)
When you view the chord chart, we load and cache the screenshot of the doc. When you’re offline, we render the cached screenshot instead.
It works pretty well! Here’s online and offline for comparison:
Not bad!
This approach does lose some fidelity: the user can’t select and copy text from the offline cached image, for example. However, the main goal of offline viewing is achieved.
Searching, filtering, and other dynamic functionality
We now have a web app that loads offline (thanks to service worker + Google Workbox recipes). And we can even view chord charts offline, thanks to caching screenshots of the docs.
If we stopped here, we’d unfortunately be missing some things. Specifically:
Search:
Filtering:
Making this sort of dynamic functionality work offline required additional work.
For search, we need to be able to search artists, song names, and lyrics. While we’re storing request/response for chord charts in the service worker cache, this is insufficient for our needs.
Why insufficient? Well, looking things up in the service worker cache typically requires sending in a request or URL from which the response is returned. But in the case of search, we have no URL or request; we’re just looking for data.
While theoretically I could fetch all chord charts from the cache, it felt like using the wrong tool for the job.
I briefly considered using the cheap and simple localStorage. But given my requirement of potentially thousands of chord charts, it too felt like the wrong tool. I also remembered localStorage has some performance issues and is intended for a few small items, not the kind of stuff I’m storing.
If service worker cache and localStorage are both out, what are our remaining options? Enter IndexedDB.
This is a full-blown indexed database built into the web platform with a many-readers-one-writer model. Its API is, like service worker, rather low-level. But it’s built for storing large(r) items and accessing them in a performant way. The right tool for this job.
I set out on implementing an IndexedDB-backed cache for chord charts. The finished product is chord-cache.ts: about 300 lines of code implementing various functionality of MessianicChords: searching, filtering, sorting chord charts.
Once implemented, I set out to make all my pages offline-aware:
The home page with search box would be updated to search the cache if we’re offline, or send a search request to the API if we’re online.
The artists page would be updated to query the cache if we’re offline, or query the API if we’re online.
…and so on.
Except this is quite redundant. I realized, “Why am I coding this up for every page? Can we hide this behind a service?”
Good old object-oriented programming to the rescue. Since all API requests were made through my chord-service.ts, I changed that class’s behavior to be cache-aware and offline-aware. The following diagram explains the change:
Sorry for the poor man’s diagram, but you get the picture. I made chord-service.ts call a ChordBackend interface. That interface has 2 implementations: one that hits our IndexedDB cache and another that hits our API. The former is used when we’re offline, the latter when we’re online.
This way, we don’t have to update any of our pages. The pages just talk to chord-service.ts like usual. Yay for polymorphism.
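A condensed sketch of that shape is below. The names chord-service and ChordBackend come from the post, but the class bodies are illustrative; the real chord-service.ts talks to IndexedDB and a web API:

```javascript
// Hedged sketch of the ChordBackend idea: two interchangeable implementations,
// and a service that picks one based on offline status.
class ApiBackend {
  async search(query) {
    // online: would hit the real API here, e.g. fetch('/api/chords/search?q=' + query)
    return [`api result for ${query}`];
  }
}

class CacheBackend {
  async search(query) {
    // offline: would query the IndexedDB-backed chord cache instead
    return [`cached result for ${query}`];
  }
}

class ChordService {
  constructor(isOnline) {
    // pages only ever talk to ChordService; the backend swap is invisible to them
    this.backend = isOnline ? new ApiBackend() : new CacheBackend();
  }

  search(query) {
    return this.backend.search(query);
  }
}
```

Because both backends expose the same search method, no page code changes when the app goes offline.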
This means that only chord-service.ts needs to know when we’re offline. How does that work?
navigator.onLine and other lies
My first thought was to use the built-in navigator.onLine API. There are even online/offline events paired with it, so you can be notified when your online status changes. Perfect!
Except, these don’t really work in practice.
The thing is, “are you online?” isn’t a super easy question to answer. What I found was if my phone had zero bars out in podunk rural Iowa, I wasn’t really online, but navigator.onLine reported true. Gah!
I also saw weird things when testing offline via browser dev tools. I hit F12 -> Network -> Offline. Surely that would put us in offline mode, yes? Nope. Sometimes (not always?) navigator.onLine returned a false positive.
Even putting my iPhone in airplane mode was no guarantee navigator.onLine would give me a correct result. 😔
The documentation for navigator.onLine warns you about some of this:
In Chrome and Safari, if the browser is not able to connect to a local area network (LAN) or a router, it is offline; all other conditions return true. So while you can assume that the browser is offline when it returns a false value, you cannot assume that a true value necessarily means that the browser can access the internet. You could be getting false positives, such as in cases where the computer is running a virtualization software that has virtual ethernet adapters that are always “connected.” Therefore, if you really want to determine the online status of the browser, you should develop additional means for checking.
In Firefox and Internet Explorer, switching the browser to offline mode sends a false value. Until Firefox 41, all other conditions return a true value; testing actual behavior on Nightly 68 on Windows shows that it only looks for LAN connection like Chrome and Safari giving false positives.
“You should develop additional means for checking [online status].” 🙄
Yeah, that’s kinda what I had to do. I built online-detector.ts which basically just makes a no-op call to my API. If it fails, we’re offline.
Do I need to keep this offline status up-to-date?
Nah. For my purposes, we detect once and go from there. You need to reload the app to see a different offline status. That works for me. But if you need something better, you could periodically hit your API and fire an event as needed.
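The idea behind online-detector.ts boils down to a few lines. In this sketch the probe is injected so the code is self-contained; in the browser it would presumably be a fetch to the site’s own API:

```javascript
// Hedged sketch: decide online/offline by actually attempting a request,
// rather than trusting navigator.onLine. In the browser, `probe` would be
// something like: () => fetch('/api/ping', { method: 'HEAD' })
async function detectOnline(probe) {
  try {
    await probe();
    return true;
  } catch {
    return false; // any failure means we treat the session as offline
  }
}
```

Per the post, this runs once at startup; reloading the app re-detects.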
Pseudo full-text search with IndexedDB
The last challenge I encountered was full-text search. Now that we have our chord-cache.ts which caches chord charts, I could fetch them by name. But the name had to be exact.
Searching for “King” would not match the chord chart “He is King”. That’s because of the way IndexedDB works. When querying an index, you can query by range or by exact value.
Query by range doesn’t work for my purposes. I could match everything up to “King” or everything after “King”, but not sentences that contain “King”.
Additionally, queries are case-sensitive by default.
To compensate for this, I created some additional indexes that stored all the words in the song title. “He is King” would store “he” and “king”. Kind of a poor man’s case-insensitive full-text search.
When the user queries for “King”, I convert it to lower case, then asynchronously query all my indexes for “king”. I feed all the results into a Set to remove duplicate results. Bingo, we have working(ish) offline search.
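Stripped of the IndexedDB plumbing, the indexing trick looks something like this (a sketch; the real chord-cache.ts stores these words in IDB indexes rather than an in-memory Map):

```javascript
// Hedged sketch of the poor man's full-text search: index each lowercased
// word of a title, then look queries up word-by-word and dedupe with a Set.
function buildWordIndex(titles) {
  const index = new Map(); // word -> Set of titles containing that word
  for (const title of titles) {
    for (const word of title.toLowerCase().split(/\s+/)) {
      if (!index.has(word)) {
        index.set(word, new Set());
      }
      index.get(word).add(title);
    }
  }
  return index;
}

function search(index, query) {
  const results = new Set(); // the Set removes duplicate matches across words
  for (const word of query.toLowerCase().split(/\s+/)) {
    for (const title of index.get(word) ?? []) {
      results.add(title);
    }
  }
  return [...results];
}
```

With this, a lowercase lookup for “king” matches “He is King” even though the stored title is mixed case.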
HTML template reuse
When I viewed my service worker cache (F12 -> Application -> Cache Storage), I noticed an oddity: every chord chart route (e.g. https://messianicchords.com/ChordSheets/5697) had cached the same HTML template.
That’s because as a Single Page Application (SPA), we use an HTML template for all chord chart detail pages, asynchronously loading in the actual chord chart details.
Not a huge deal, but this means that if I cache 1000 chord charts, I’ll have the exact same HTML template in the service worker cache for each one. Wasteful.
Is there a way to tell our service worker cache, “Hey, if you come across /chords/123, use the same cached result from /chords/678”?
It turns out that yes, this is possible and is quite easy with Google Workbox custom plugins. Specifically, you can pass a function to Google Workbox’s various recipes to tell it cache keys to use. This lets me use the same cache key for all my chord chart details:
// Page cache recipe: https://developers.google.com/web/tools/workbox/modules/workbox-recipes#page_cache
pageCache({
  plugins: [{
    // We want to override the cache key for:
    // - Artist page: /artist/Joe%20Artist
    // - Chord details page: /ChordSheets/2630
    // Reason is, these pages are the same HTML, just different behavior.
    cacheKeyWillBeUsed: async function({ request }) {
      const isArtistPage = !!request.url.match(/\/artist\/[^\/]+$/);
      if (isArtistPage) {
        return new URL(request.url).origin + "/artist/_";
      }

      const chordDetailsRegex = new RegExp(/\/ChordSheets\/[\w-]+$/, "i");
      const isChordDetailsPage = !!request.url.match(chordDetailsRegex);
      if (isChordDetailsPage) {
        return new URL(request.url).origin + "/ChordSheets/_";
      }

      return request.url;
    }
  }]
});
Here we’re using the Google Workbox pageCache recipe, which hits the network and falls back to the cache if the network is too slow to respond.
We pass a custom plugin (really, just a function) to the recipe. It defines a cacheKeyWillBeUsed function, which Workbox uses to determine the cache key. In it, I say, “If we’re navigating to a chord details page, just use /ChordSheets/_ as the cache key.”
I do the same for artist page, for the same reason.
End result is, we avoid hundreds or thousands of duplicates for chord details and artist pages.
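To make the mapping concrete, the key normalization is just a pure URL-to-string function. Here it is extracted for illustration, with regexes mirroring the plugin above:

```javascript
// Hedged sketch: the cache-key normalization as a standalone function.
function cacheKeyFor(url) {
  const origin = new URL(url).origin;
  if (/\/artist\/[^\/]+$/.test(url)) {
    return origin + "/artist/_"; // all artist pages share one cached template
  }
  if (/\/ChordSheets\/[\w-]+$/i.test(url)) {
    return origin + "/ChordSheets/_"; // ditto for chord details pages
  }
  return url; // everything else caches under its own URL
}
```

Every chord details URL collapses to a single cache entry, while unrelated pages keep their own keys.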
Summary
It’s possible to build great offline web apps. For most apps, service worker will suffice.
For my purposes, I needed to go further: adding an IndexedDB for my web app to enable full offline support for dynamic functionality like searching, filtering, and sorting.
iframes pose a difficulty for making your app available offline, as they start a new browsing context unintercepted by your service worker. If you own the domain you’re iframing, you can still make it work. For apps like mine that iframe content on domains I don’t own (docs.google.com in my case), I had to work around the issue by creating screenshots of documents and loading those while offline.
My app doesn’t let users create or update data, so I didn’t have to manage this while offline. But the web platform can handle that, too, via BackgroundSync.
Bottom line: making a PWA offline is entirely possible. I think it’s amazing I can write software that works online and offline whether on iOS, Android, Windows, Mac, and VR/AR devices, using just a single codebase built on web standards.
Spin up a RavenDB database quickly and cheaply. Create a highly-available database cluster in minutes. Try out the all new RavenDB Cloud for free at cloud.ravendb.net.
RavenDB Cloud is a new database-as-a-service from the creators of RavenDB. No need to download any software, futz with port forwarding or virtual machine management: just visit cloud.ravendb.net and spin up a RavenDB instance.
RavenDB itself is a distributed database: while it can run as a single server, Raven is designed to work well in a cluster, multiple instances of your database that sync to each other and keep your app running even if a database server goes down. RavenDB Cloud builds on this and makes it super simple to spin up a database cluster to make your app more scalable and resilient.
In this article, I’ll walk you through both. We’ll start by spinning up a basic, single-node instance in RavenDB Cloud. Then I’ll show you how to spin up a full cluster. All the while, we’ll be talking to our database from an ASP.NET Core web app. Let’s get started!
Spinning up a (free!) RavenDB Cloud instance
RavenDB Cloud offers a free instance. This is great for testing the waters and doing some dev hacking on Raven. I also use the free instance as my “local” database during development; it’s super easy to spin up an instance in RavenDB Cloud and point my locally running apps at it. Let’s do that now.
You’ll register with your email address and then you’ll be asked what domain you’d like. This will be the URL through which you’ll access your databases. For this CodeProject article, I decided on a fitting name:
The next step is optional: billing information. If you’re just playing around with the free instance, you can click “skip billing information.” Now we’re presented with the final summary of our information. Click “Sign up” and we’re ready to roll:
Now we’re registered and we’ll receive our sign-in link via email:
I’ve now got an email with a magic link that signs me in. Clicking that link takes me to my RavenDB Cloud dashboard:
Here we’ll create a new product: our free RavenDB Cloud instance.
You might wonder: what do we mean by “product” here? Is it just a single database? A product here really means one or more cloud servers in which one or more databases reside. So, for example, our free instance can have multiple databases inside of it, as we’ll see shortly.
We’ll click “Add Product” and we’re asked what we want to create, with the default being the free instance:
If we change nothing on this screen, we’ll create a free instance, which is perfect for our initial setup.
Before we move on, notice we can create an instance in either Amazon’s or Microsoft’s cloud. We can also choose the region, for example, AWS Canada or Azure West US:
We can also choose the tier: Free, Development, or Production. For our first example here, we’re going to go with the free instance.
It’s limited to a single node (no highly available cluster) with 10 GB of disk space, running on low-end hardware (2 vCPUs and 0.5 GB RAM). That’s fine for testing and for small projects; perfect for testing the waters. We’ll go ahead and choose the free instance and click Next.
Now we can specify the Display Name of the product; this is what we’ll see on our dashboard. Optionally, you can limit access to your database by IP range. Raven databases are secure by default using client certificates (we’ll talk about these more in a moment), so limiting access to an IP range isn’t strictly necessary, but it adds an additional layer of security. For now, I’ll leave the IP range open to the world.
We’ll click Next to see the summary of our RavenDB Cloud product, then click Create.
Once we click Create, I can see the free instance on my dashboard:
Here you can see our free instance spinning up in AWS, with a yellow “Creating” status. After a moment, it will finish spinning up and you’ll see the product go green in the Active state:
Congrats! You just spun up a free RavenDB Cloud instance.
We want to connect to this instance and create some databases. We can do that through code, but with RavenDB, we can also do it through the web using Raven’s built-in tooling, Raven Studio. You’ll notice the URLs section of the instance: that’s the URL through which we can access our database server and create databases.
But wait, isn’t that a security risk? If you try it right now in your browser, going to https://a.free.clistctrl.ravendb.cloud, you’ll be prompted for a security certificate. Where do you get the certificate? RavenDB Cloud generates one for you, and it’s available through the “Download Certificate” button:
Clicking “Download Certificate” will download a zip file containing a .pfx file, the certificate we need to access our database server:
(Yes, I really did pay for a registered copy of WinRAR)
You’ll see 2 .pfx files in there: one with a password, one without. You’re free to use either, but for our purposes, we’re going to use the one without a password. I’ll double-click free.clistctrl.client.certificate.pfx and click Next all the way through until I’m done; no special settings needed.
Once I’ve installed that certificate, I can now securely access my database using the URL listed in the dashboard:
Note: If you tried to access the URL before installing the certificate, you may run into an issue where your browser won’t prompt you for a certificate even after installing it. If that happens, simply restart your browser, or alternately, open the link in a private/incognito browser window.
Going to that URL in Chrome will prompt me to choose a certificate. I’ll choose the one we just installed, free.clistctrl. Hooray! We’re connected to our free RavenDB Cloud instance:
What we’re looking at here is RavenDB’s built-in tooling, Raven Studio. You can think of Raven Studio as akin to SQL Management Studio: it’s where we can create databases, view data in our databases, execute queries, etc.
Our first step is going to be creating a database. I’m going to click Databases -> New database. I’m going to name it Outlaws, which we’ll use to store wonderful mythic outlaws of the wild west.
After clicking, we have a new database up and running in RavenDB Cloud. Woohoo!
How does it look to connect to this database from, say, an ASP.NET Core web app? It’s pretty simple, actually. I’m going to do that now, just to show how simple it is.
While RavenDB has official clients for .NET, JVM, Go, Python, Node.js, and C++, I’m most familiar with C# and .NET, and I think Raven shines brightest on .NET Core. So, I’ve created a new ASP.NET Core web app in Visual Studio, then added the RavenDB.Client NuGet package.
Inside our StartUp.cs, I initialize our connection to Raven:
That’s it! We can now store stuff in our database:
Likewise, we can query for our objects easily:
Saving and querying is a breeze. If you’re new to Raven, I encourage you to check out the awesome learning materials to help you get started.
One final note here: you can spin up multiple databases inside your RavenDB Cloud product. In this case, we’ve spun up a free instance and created a single Outlaws database inside it, but we can also spin up other databases on this same free server as needed. Since the free tier supports 10 GB of disk space, we can spin up as many databases as can fit inside 10 GB.
Spinning up a cluster in RavenDB Cloud
We just finished setting up a free instance in RavenDB Cloud, created a database, and connected to it, saved and queried some data.
All well and good.
But what happens when your database server goes down? Does your app stop working? In our case, suppose AWS or Azure had a major outage and our free instance went offline. The result is that our app would stop working; it can’t reach the database.
RavenDB is, at its core, a distributed database: it’s designed to run multiple copies of your database in a cluster. A cluster is 3 or more nodes (database server instances) in which all the databases sync with each other. If one node goes down, the others still work, and your app will automatically switch to one of the online nodes. We call this transparent failover. When the node comes back online, all the changes that happened while it was offline get automatically synced to it.
A wonderful part of all this is you don’t have to do extra work to make it happen: you just set up your database as a cluster, and Raven takes care of the rest. The upside is your app is more resilient to failure: if one of your database nodes goes down, your app keeps working.
Let’s try that now using RavenDB Cloud.
We’ll go back to the RavenDB Cloud portal. We already have our CodeProjectFree product:
Let’s add a new product; we’ll call it CodeProjectCluster. I’ll click Add Product like before, but this time, we’re going to specify the Production tier, which will set up our database in a 3-node cluster:
You’ll notice above we set the Tier level to Production; this will set up our database in a cluster. We can tweak the CPU priority, cluster size, and storage size as needed; for this example we’ll leave these at the smallest sizes.
We’ll click Next and set the instance names as before. Click Finish, and we’re ready to roll: on our dashboard, we now see the cluster being created:
Notice that while our CodeProjectFree instance contains a single URL (there’s only 1 node), our new CodeProjectCluster contains 3 URLs, each one being a node in the cluster. The first node is node A, so its URL is a.cluster.clistctrl.ravendb.cloud; the second node is node B with a corresponding URL, and so on.
Once the cluster is finished creating, I’ll download and install the certificate as before:
Even though we have 3 nodes, we have a single certificate that works for all 3. Once I’ve downloaded and installed it, I can click on any of the node URLs. Let’s try the second node, Node B, which is at https://b.cluster.clistctrl.ravendb.cloud. That URL takes me to Raven Studio for Node B:
Let’s go ahead and create a new database on this node. As before, I’ll click Databases -> New Database, and we’ll call it OldWestHeroes:
Notice we now have a Replication factor of 3. This means our OldWestHeroes database will be replicated (copied and automatically synchronized) across all 3 nodes. Once we click Create, the database will be created and we’ll see it on the node:
But since we’re running in a cluster, this database will also automatically be created on the other nodes. Notice under the database name, we see Node B, Node C, and Node A; Raven Studio is telling us this database is up and ready on all 3 of our nodes.
Click the Manage group button, and we can see a visual description of our cluster topology:
On the right, we can see all 3 nodes are all replicating to each other. (If any nodes were offline, we would see the node here as red with a broken connection.)
This visual tells us our database is working on all 3 nodes in our cluster. It also shows ongoing tasks, such as automatic backups, hanging off the nodes responsible for them. You’ll notice the “Server Wide Backup” task hanging off Node A; RavenDB Cloud creates this task for us automatically. Database backups are free for up to 1GB, and $1/GB/month beyond that.
We’re looking at Node B right now, but since all 3 nodes in our cluster are up and running, we should see the database on any of the other nodes.
Yep! Our OldWestHeroes database has been automatically created on this node. And because these nodes are automatically synchronized, any changes we make to one node will show up automatically on the other nodes.
Let’s try that out too. Here on Node A, I’m going to click the OldWestHeroes database, then click New Document. I’ll create a new Cowboy document:
I’ll click Save, and our database now has a single Cowboy in it:
And because we’re in a cluster, all the other nodes will now have this same document. Let’s head over to Node C, located at https://c.cluster.clistctrl.ravendb.cloud:
Sure enough, our Cowboy document is there. I can edit this Cowboy and change his name, and of course those changes will be synced to all the other nodes.
How does that change our app code? Going back to our C# web app, does our code have to change?
Not much! The code is essentially the same as before, but instead of specifying a single URL, we specify the URLs of all the nodes in our cluster:
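As a sketch (RavenDB C# client; the certificate file name is an assumption), the cluster-aware initialization looks something like this:

```csharp
// Startup.cs: one-time DocumentStore initialization.
// Instead of one URL, we list every node in the cluster; the client
// fails over to another node automatically if one goes down.
var store = new DocumentStore
{
    Urls = new[]
    {
        "https://a.cluster.clistctrl.ravendb.cloud",
        "https://b.cluster.clistctrl.ravendb.cloud",
        "https://c.cluster.clistctrl.ravendb.cloud"
    },
    Database = "OldWestHeroes",
    Certificate = new X509Certificate2("cluster.clistctrl.client.certificate.pfx")
};
store.Initialize();
```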
This one-time initialization code in our Startup.cs file is the only code that has to change. The rest of the app code doesn’t change; we can still write objects and query them as usual:
Ditto for querying:
The upside for our app is that even if some of the nodes in our cluster go down (say, for instance, during an Azure outage), our app keeps working, transparently failing over to the other nodes in the cluster. No extra code needed!
Summary
In this article, I’ve shown how to quickly spin up a free database in RavenDB Cloud. We saw how it’s secured with a certificate and how we can connect to it from a C# web app. It’s quick and easy, and great for testing the waters.
We also looked at something more production-ready: spinning up a 3-node cluster in RavenDB Cloud. We saw how databases in the cluster are automatically synced across all the nodes: any change on one node is quickly replicated to the others. We also looked at the minimal code change (2 additional lines) required to move our web app from our free single-node instance to our 3-node cluster.
Apps running against a cluster are more resilient in the face of failure: your app keeps running even if some of the nodes go down. Raven allows reading and writing to any of the nodes of the cluster, keeping your app running in the face of hardware failure or network issues.
RavenDB Cloud lets you spin up a single RavenDB instance or a full production cluster quickly and easily in the cloud. I hope this article has helped you understand what it is and why you’d use it, and I hope you’ll give it a try today: cloud.ravendb.net
Summary: How to use TypeScript async/await with AngularJS 1.x apps, compiling down to ES5 browsers.
With TypeScript 2.1+, you can start using the awesome new async/await functionality today, even if your users are running old browsers. TypeScript will compile it down to something all browsers can run.
I’m using Angular 1.x for many of my apps, and I wanted to use the sexy new async/await functionality in my Angular code. I didn’t find any examples online of how to do this, so I did some experimenting and figured it out.
For the uninitiated, async/await is a big improvement on writing clean async code:
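As a small illustration (getUser is a stand-in for a real HTTP call), compare the promise-chain version with the async/await version:

```typescript
// getUser is a stand-in for a real HTTP call; it resolves immediately here.
function getUser(id: number): Promise<{ id: number; name: string }> {
  return Promise.resolve({ id, name: "Ada" });
}

// The promise-chain style:
function greetThen(id: number): Promise<string> {
  return getUser(id).then(user => `Hello, ${user.name}!`);
}

// The async/await style: same behavior, but it reads like synchronous code.
async function greetAwait(id: number): Promise<string> {
  const user = await getUser(id);
  return `Hello, ${user.name}!`;
}
```

The difference compounds quickly once you have multiple dependent async calls, error handling, and loops.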
Getting this to work with Angular is pretty simple, requiring only a single step.
1. Use $q for Promise
Since older browsers may not have a global Promise object, we need to drop in a polyfill. Fortunately, we can just use Angular’s $q service as the Promise, as it’s compatible with the Promises/A+ standard.
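The wiring is a one-liner in an AngularJS run block. This is a non-runnable sketch (`app` is your Angular module); assigning $q to `$window.Promise` is the polyfill step:

```typescript
// Sketch: substitute $q for the global Promise at app startup.
app.run(["$q", "$window", ($q: any, $window: any) => {
    // async/await compiles down to Promise calls, so pointing the global
    // Promise at $q means every awaited promise runs through Angular's
    // digest cycle: no manual $scope.$apply needed.
    $window.Promise = $q;
}]);
```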
This kills two birds with one stone: we now have a Promise polyfill, and when these promises resolve, the scope will automatically be applied.
2. You’re done! Sort of…
That’s actually enough to start using async/await against Promise-based code, such as ng.IPromise<T>:
Cool. We’re cooking with gas. Except…
Making it cleaner.
If you look at the transpiled JavaScript, you’ll see that TypeScript generates 2 big helper functions at the top of every file that uses an async function:
Yikes! Sure, this is how the TypeScript compiler is working its magic: simulating async/await on old platforms going back to IE8 (and earlier?).
Love the magic, but hate the duplication: we’re generating this helper code for every TS file that uses async functions. Ideally, we’d generate it once and have all our async functions reuse it.
We can do just that, explained in steps 3 and 4 below.
3. Use noEmitHelpers TS compiler flag
The TypeScript 2.1+ compiler supports the noEmitHelpers flag. This instructs TypeScript not to emit any of its helpers: not for async, not for generators, not for class inheritance…nuttin’.
Let’s start with that. In my tsconfig.json file, I add the flag:
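A minimal tsconfig.json sketch; the noEmitHelpers flag is the relevant part, the other options are illustrative:

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "noEmitHelpers": true
  }
}
```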
You can see we’ve set noEmitHelpers to true. Now if we compile our app, you’ll notice the transpiled UsersController.js (and any of your code files that use async functions) no longer has all the magic transpiler stuff. Instead, you’ll notice your async functions are compiled down to something like this:
OK, that actually looks fairly clean. Except if you run it, you’ll get an error saying __awaiter is undefined. And that’s because we just told TypeScript to skip generating the __awaiter helper function.
Instead of having the TypeScript compiler generate that in each file, we’re just going to define those magic helper functions once.
4. Use tslib.js to define the magic helper functions once.
Microsoft maintains tslib, the runtime helpers library for TypeScript apps. It’s all contained in tslib.js, a single small file (about 200 lines of JS) that defines all the helper functions TypeScript can emit. I added this file to my project, and now all my async calls work again.
Alternatively, you can tell the TypeScript compiler to import the helpers from tslib for you using the importHelpers flag.
Summary: RavenDB opens up some new possibilities for working with view models: objects that contain pieces of data from other objects. With Raven, we can use .Include to simulate relational joins. But we can go beyond that for superior performance by exploiting Raven’s transformers, custom indexes, and even complex object storage.
Modern app developers often work with conglomerations of data cobbled together to display a UI page. For example, you’ve got a web app that displays tasty recipes, and on that page you also want to display the author of the recipe, a list of ingredients and how much of each ingredient. Maybe comments from users on that recipe, and more. We want pieces of data from different objects, and all that on a single UI page.
For tasks like this, we turn to view models. A view model is a single object that contains pieces of data from other objects. It contains just enough information to display a particular UI page.
In relational databases, the common way to create view models is to utilize multiple JOIN statements to piece together disparate data to compose a view model.
But with RavenDB, we’re given new tools which enable us to work with view models more efficiently. For example, since we’re able to store complex objects, even full object graphs, there’s less need to piece together data from different objects. This opens up some options for efficiently creating and even storing view models. Raven also gives us some new tools like Transformers that make working with view models a joy.
In this article, we’ll look at different ways to work with view models in RavenDB. I’ll also give some practical advice on when to use each approach.
The UI
We’re building a web app that displays tasty recipes to hungry end users. In this article, we’ll be building a view model for a UI page that looks like this:
At first glance, we see several pieces of data from different objects making up this UI.
Name and image from a Recipe object
List of Ingredient objects
Name and email of a Chef object, the author of the recipe
List of Comment objects
List of categories (plain strings) to help users find the recipe
A naive implementation might query for each piece of data independently: a query for the Recipe object, a query for the Ingredients, and so on.
This has the downside of multiple trips to the database and implies performance overhead. If done from the browser, we’re looking at multiple trips to the web server, and multiple trips from the web server to the database.
A better implementation makes a single call to the database to load all the data needed to display the page. The view model is the container object for all these pieces of data. It might look something like this:
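One possible shape for that view model (all property names here are illustrative):

```csharp
// A container for everything the recipe details page needs, in one object.
public class RecipeViewModel
{
    public string Name { get; set; }
    public string ImageUrl { get; set; }
    public List<Ingredient> Ingredients { get; set; }
    public List<string> Categories { get; set; }
    public string ChefName { get; set; }
    public string ChefEmail { get; set; }
    public List<Comment> Comments { get; set; }
}
```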
How do we populate such a view model from pieces of data from other objects?
How we’ve done it in the past
In relational databases, we tackle this problem using JOINs to piece together a view model on the fly:
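In Entity Framework-style pseudo code, that piecing-together might look like this sketch (entity and property names are assumptions):

```csharp
// Several joins and subqueries compose the view model at query time.
var viewModel =
    (from recipe in db.Recipes
     join chef in db.Chefs on recipe.ChefId equals chef.Id
     where recipe.Id == recipeId
     select new RecipeViewModel
     {
         Name = recipe.Name,
         ChefName = chef.Name,
         ChefEmail = chef.Email,
         Ingredients = db.Ingredients.Where(i => i.RecipeId == recipe.Id).ToList(),
         Comments = db.Comments.Where(c => c.RecipeId == recipe.Id).ToList(),
         Categories = db.Categories.Where(c => c.RecipeId == recipe.Id)
                        .Select(c => c.Name).ToList()
     }).Single();
```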
It’s not particularly beautiful, but it works. This pseudo code could run against some object-relational mapper, such as Entity Framework, and gives us our results back.
However, there are some downsides to this approach.
Performance: JOINs and subqueries often have a non-trivial impact on query times. While JOIN performance varies per database vendor, per the type of column being joined on, and whether there are indexes on the appropriate columns, there is nonetheless a cost associated with JOINs and subqueries. Queries with multiple JOINs and subqueries only add to the cost. So when your user wants the data, we’re making them wait while we perform the join.
DRY modeling: JOINs often require us to violate the DRY (Don’t Repeat Yourself) principle. For example, if we want to display Recipe details in a different context, such as a list of recipe details, we’d likely need to repeat our piece-together-the-view-model JOIN code for each UI page that needs our view model.
Can we do better with RavenDB?
Using .Include
Perhaps the easiest and most familiar way to piece together a view model is to use RavenDB’s .Include.
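A sketch of the pattern (assuming the Recipe stores a ChefId and a list of CommentIds):

```csharp
// Load a Recipe and its related documents in one round trip.
var recipe = session
    .Include<Recipe>(r => r.ChefId)   // pre-load the chef
    .Include(r => r.CommentIds)       // pre-load the comments
    .Load<Recipe>(recipeId);

// These Loads are served from the session cache; no extra server trips.
var chef = session.Load<Chef>(recipe.ChefId);
var comments = session.Load<Comment>(recipe.CommentIds);
```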
In the above code, we make a single remote call to the database and load the Recipe and its related objects.
Then, after the Recipe returns, we can call session.Load to fetch the already-loaded related objects from memory.
This is conceptually similar to a JOIN in relational databases. Many devs new to RavenDB default to this pattern out of familiarity.
Better modeling options, fewer things to .Include
One beneficial difference between relational JOINs and Raven’s .Include is that we can reduce the number of .Include calls due to better modeling capabilities. RavenDB stores our objects as JSON, rather than as table rows, and this enables us to store complex objects beyond what is possible in relational table rows. Objects can contain encapsulated objects, lists, and other complex data, eliminating the need to .Include related objects.
For example, logically speaking, .Ingredients should be encapsulated in a Recipe, but relational databases don’t support encapsulation. That is to say, we can’t easily store a list of ingredients per recipe inside a Recipe table. Relational databases would require us to split a Recipe’s .Ingredients into an Ingredient table, with a foreign key back to the Recipe it belongs to. Then, when we query for recipe details, we need to JOIN them together.
But with Raven, we can skip this step and gain performance. Since .Ingredients should logically be encapsulated inside a Recipe, we can store them as part of the Recipe object itself, and thus we don’t have to .Include them. Raven allows us to store and load Recipe that encapsulate an .Ingredients list. We gain a more logical model, we gain performance since we can skip the .Include (JOIN in the relational world) step, and our app benefits.
Likewise with the Recipe’s .Categories. In our Tasty Recipes app, we want each Recipe to contain a list of categories. A recipe might contain categories like [“italian”, “cheesy”, “pasta”]. Relational databases struggle with such a model: we’d have to store the strings as a single delimited string, as an XML data type, or as some other non-ideal solution. Each has its downsides. Or we might even create a new Categories table to hold the string categories, along with a foreign key back to their recipe. That solution requires an additional JOIN at query time when querying for our RecipeViewModel.
Raven has no such constraints. JSON documents tend to be a better storage format than rows in a relational table, and our .Categories list is an example. In Raven, we can store a list of strings as part of our Recipe object; there’s no need to resort to hacks involving delimited fields, XML, or additional tables.
RavenDB’s .Include is an improvement over relational JOINs. Combined with improved modeling, we’re off to a good start.
So far, we’ve looked at Raven’s .Include pattern, which is conceptually similar to relational JOINs. But Raven gives us additional tools that go above and beyond JOINs. We discuss these below.
Transformers
RavenDB provides a means to build reusable server-side projections. In RavenDB we call these Transformers. We can think of transformers as a C# function that converts an object into some other object. In our case, we want to take a Recipe and project it into a RecipeViewModel.
Let’s write a transformer that does just that:
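A sketch of such a transformer, using the RavenDB 3.x transformer API (the ChefId and CommentIds properties are assumptions about the model):

```csharp
// Server-side, reusable projection from Recipe to RecipeViewModel.
public class RecipeViewModelTransformer : AbstractTransformerCreationTask<Recipe>
{
    public RecipeViewModelTransformer()
    {
        TransformResults = recipes =>
            from recipe in recipes
            let chef = LoadDocument<Chef>(recipe.ChefId)   // related doc, loaded server-side
            select new
            {
                recipe.Name,
                recipe.Ingredients,
                recipe.Categories,
                ChefName = chef.Name,
                ChefEmail = chef.Email,
                Comments = recipe.CommentIds.Select(id => LoadDocument<Comment>(id))
            };
    }
}
```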
In the above code, we’re accepting a Recipe and spitting out a RecipeViewModel. Inside our Transformer code, we can call .LoadDocument to load related objects, like our .Comments and .Chef. And since Transformers are server-side, we’re not making extra trips to the database.
Once we’ve defined our Transformer, we can easily query any Recipe and turn it into a RecipeViewModel.
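For a single document, that can be as simple as this sketch:

```csharp
// Load a Recipe by id, projected through the transformer server-side.
var viewModel = session.Load<RecipeViewModelTransformer, RecipeViewModel>("recipes/42");
```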
This code is a bit cleaner than calling .Include as in the previous section; there are no more .Load calls to fetch the related objects.
Additionally, using Transformers enables us to keep DRY. If we need to query a list of RecipeViewModels, there’s no repeated piece-together-the-view-model code:
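For example, a list query can reuse the same transformer (sketch; the category filter is illustrative):

```csharp
// The same server-side projection, applied to a whole result set.
var viewModels = session.Query<Recipe>()
    .Where(r => r.Categories.Contains("italian"))
    .TransformWith<RecipeViewModelTransformer, RecipeViewModel>()
    .ToList();
```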
Storing view models
Developers accustomed to relational databases may be slow to consider this possibility, but with RavenDB we can actually store view models as-is.
It’s certainly a different way of thinking. Rather than storing only our domain roots (Recipes, Comments, Chefs, etc.), we can also store objects that contain pieces of them. Instead of only storing models, we can also store view models.
This technique has benefits, but also trade-offs:
Query times are faster. We don’t need to load other documents to display our Recipe details UI page. A single call to the database with zero joins – it’s a beautiful thing!
Data duplication. We’re now storing Recipes and RecipeViewModels. If an author changes his recipe, we may need to also update the RecipeViewModel. This shifts the cost from query time to write time, which may be preferable in a read-heavy system.
The data duplication is the biggest downside. We’ve effectively denormalized our data at the expense of adding redundant data. Can we fix this?
Storing view models + syncing via RavenDB’s Changes API
Having to remember to update RecipeViewModels whenever a Recipe changes is error prone. Responsibility for syncing the data is now in the hands of you and the other developers on your team. Human error is almost certain to creep in — someone will write new code to update Recipes and forget to also update the RecipeViewModels — we’ve created a pit of failure that your team will eventually fall into.
We can improve on this situation by using RavenDB’s Changes API. With Raven’s Changes API, we can subscribe to changes to documents in the database. In our app, we’ll listen for changes to Recipes and update RecipeViewModels accordingly. We write this code once, and future self and other developers won’t need to update the RecipeViewModels; it’s already happening ambiently through the Changes API subscription.
The Changes API utilizes Reactive Extensions for a beautiful, fluent and easy-to-understand way to listen for changes to documents in Raven. Our Changes subscription ends up looking like this:
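A sketch of such a subscription (UpdateRecipeViewModel and DeleteRecipeViewModel are hypothetical helpers you’d write to re-project and store the view model):

```csharp
// Listen for changes to Recipe documents and keep the stored
// RecipeViewModels in sync.
store.Changes()
    .ForDocumentsStartingWith("recipes/")
    .Subscribe(change =>
    {
        if (change.Type == DocumentChangeTypes.Put)
        {
            UpdateRecipeViewModel(change.Id);   // added or updated
        }
        else if (change.Type == DocumentChangeTypes.Delete)
        {
            DeleteRecipeViewModel(change.Id);
        }
    });
```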
Easy enough. Now whenever a Recipe is added, updated, or deleted, we’ll get notified and can update the stored view model accordingly.
Indexes for view models: let Raven do the hard work
One final, more advanced technique is to let Raven do the heavy lifting in mapping Recipes to RecipeViewModels.
A quick refresher on RavenDB indexes: in RavenDB, all queries are satisfied by an index. For example, if we query for Recipes by .Name, Raven will automatically create an index for Recipes-by-name, so that all future queries will return results near instantly. Raven then intelligently manages the indexes it’s created, throwing server resources behind the most-used indexes. This is one of the secrets to RavenDB’s blazing fast query response times.
RavenDB indexes are powerful and customizable. We can piggy-back on RavenDB’s indexing capabilities to generate RecipeViewModels for us, essentially making Raven do the work for us behind the scenes.
First, let’s create a custom RavenDB index:
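A sketch of such an index (property names assumed, as before):

```csharp
// A map index that loads related documents and stores the projected
// fields in the index itself.
public class Recipes_ViewModels : AbstractIndexCreationTask<Recipe>
{
    public Recipes_ViewModels()
    {
        Map = recipes =>
            from recipe in recipes
            let chef = LoadDocument<Chef>(recipe.ChefId)
            select new
            {
                recipe.Name,
                recipe.Categories,
                ChefName = chef.Name,
                Comments = recipe.CommentIds.Select(id => LoadDocument<Comment>(id))
            };

        // Store the projected fields so queries can read them straight
        // from the index, without touching the source documents.
        StoreAllFields(FieldStorage.Yes);
    }
}
```

Queries then go against this index and project the stored fields directly into RecipeViewModel instances.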
In RavenDB, we use LINQ to create indexes. The above index tells RavenDB that for every Recipe, we want to spit out a RecipeViewModel.
This index definition is similar to our transformer definition. A key difference, however, is that the transformer is applied at query time, whereas the index is applied asynchronously in the background as soon as a change is made to a Recipe. Queries against the index will be faster than queries using the transformer: the index gives us pre-computed RecipeViewModels, whereas the transformer creates RecipeViewModels on demand.
Once the index is deployed to our Raven server, Raven will store a RecipeViewModel for each Recipe.
Querying for our view models is quite simple and we’ll get results back almost instantaneously, as the heavy lifting of piecing together the view model has already been done.
Now whenever a Recipe is created, Raven will asynchronously and intelligently execute our index and spit out a new RecipeViewModel. Likewise, if a Recipe, Comment, or Chef is changed or deleted, the corresponding RecipeViewModel will automatically be updated. Nifty!
Storing view models is certainly not appropriate for every situation. But some apps, especially read-heavy apps with a priority on speed, might benefit from this option. I like that Raven gives us the freedom to do this when it makes sense for our apps.
Conclusion
In this article, we looked at using view models with RavenDB. Several techniques are at our disposal:
.Include: loads multiple related objects in a single query.
Transformers: reusable server-side projections which transform Recipes to RecipeViewModels.
Storing view models: Essentially denormalization. We store both Recipes and RecipeViewModels. Allows faster read times at the expense of duplicated data.
Storing view models + .Changes API: The benefits of denormalization, but with code to automatically sync the duplicated data.
Indexes: utilize RavenDB’s powerful indexing to have Raven denormalize data for us automatically and keep the duplicated data in sync. The duplicated data is stashed away as fields in an index, rather than as distinct documents.
For quick and dirty scenarios and one-offs, using .Include is fine. It’s the most common way of piecing together view models in my experience, and it’s also familiar to devs with relational database experience. And since Raven allows us to store things like nested objects and lists, there is less need for joining data; we can instead store lists and encapsulated objects right inside our parent objects where it makes sense to do so.
Transformers are the next most widely used. If you find yourself converting a Recipe to a RecipeViewModel in multiple places in your code, use a Transformer. They’re easy to write, typically small, and familiar to anyone with LINQ experience. Using them in your queries is a simple one-liner that keeps your query code clean and focused.
Storing view models is rarely used, in my experience, but it can come in handy for read-heavy apps or for UI pages that need to be blazing fast. Pairing this with the .Changes API is an appealing way to automatically keep Recipes and RecipeViewModels in sync.
Finally, we can piggy-back off Raven’s powerful indexing feature to have Raven automatically create, store, and synchronize RecipeViewModels for us. This has a touch of magic to it, and is an attractive way to get great performance without having to worry about keeping denormalized data in sync.
Using these techniques, RavenDB opens up some powerful capabilities for the simple view model. App performance and code clarity benefit as a result.
Summary: A modern dev stack for modern web apps. See how I built a new web app using RavenDB, Angular, Bootstrap and TypeScript. Why these tools are an excellent choice for modern web dev.
Twin Cities Code Camp (TCCC) is the biggest developer event in Minnesota. I’ve written about it before: it’s a free event where 500 developers descend on the University of Minnesota to attend talks on software dev, learn new stuff, have fun, and become better at the software craft.
I help run the event, and this April we are hosting our 20th event: 20 events in 10 years. Yes, we’ve been putting on Code Camps for a decade! That’s practically an eternity in internet years.
For the 20th event, we thought it was time to renovate the website. Our old website had been around since about 2006 (a decade old), and the old site was showing its age:
It got the job done, but admittedly it’s not a modern site. Rather plain Jane; it didn’t reflect the awesome event that Code Camp is. We wanted something fresh and new for our 20th event.
On the dev side, everything on the old site was hard-coded (no database), meaning every time we wanted to host a new event or add speakers, talks, or bios, we had to write code and add new HTML pages. We wanted something modern, where events, talks, speakers, and bios are all stored in a database that drives the whole site.
Dev Stack
Taking a stab at rewriting the TCCC website, I wanted to really make it a web app. That is, I want it database-driven, and I want some dynamic client-side functionality: things like letting speakers upload their slides, letting attendees vote on talks, having admins log in to add new talks, etc. This requires something more than a set of static web pages.
Additionally, most of the people attending Code Camp will be looking at this site on their phone or tablet. I want to build a site that looks great on mobile.
To build amazing web apps, I turn to my favorite web dev stack:
RavenDB: the very best get-shit-done database. Forget tables and sprocs and schemas. Just store your C# objects without having to map them to tables, columns, and rows. Query them with LINQ when you’re ready.
AngularJS: a front-end framework for building dynamic web apps. Transforms your website from a set of static pages into a coherent web application with client-side navigation, routing, automatic updates with data-binding, and more awesomeness. Turns this:
Bootstrap: CSS to make it all pretty, make it consistent, and make it look great on mobile devices. Turns this: …into this:
TypeScript: JavaScript extended with optional types, classes, and features from the future. This lets me build clean, easily refactored, maintainable code that runs in your browser. So instead of this ugly JavaScript code: …we instead write this nice modern JavaScript + ES6 + types code, which is TypeScript:
ASP.NET Web API: small, clean RESTful APIs in elegant, asynchronous C#. Your web app talks to these to get the data from the database and display the web app.
FontAwesome: icons for your app. Turns this: …into this:
I find these tools super helpful and awesome, and I’m pretty darn productive with them. I’ve used this exact stack to build all kinds of apps, both professional and personal:
And a bunch of internal apps at my current employer, 3M, use this same stack internally. I find this stack lets me get stuff done quickly without much ceremony and I want to tell you why this stack works so well for me in this post.
RavenDB
I’m at the point in my career where I don’t put up with friction. If there is friction in my workflow, it slows me down and I don’t get stuff done.
RavenDB is a friction remover.
My old workflow, when I was a young and naïve developer, went like this:
Hmm, I think I need a database.
Well, I guess I’d better create some tables.
I should probably create columns with the right types for all these tables.
Now I need to save my C# objects to the database. I guess I’ll use an object-relational mapper, like LINQ-to-SQL. Or maybe NHibernate. Or maybe Entity Framework.
Now I’ve created mappings for my C# objects to my database. Of course, I have to massage those transitions; you can’t really do inheritance or polymorphism in SQL. Heck, my object contains a List<string>, and even something that simple doesn’t map well to SQL. (Do I make that list of strings its own table with a foreign key? Or combine them into a comma-separated single column and rehydrate them on query? Or…ugh.)
Hmm, why is db.Events.Where(…) taking forever? Oh, yeah, I didn’t add any indexes. Crap, it’s doing a full table scan. What’s the syntax for that again?
This went on and on. It wasn’t until I tried RavenDB that I saw all this friction isn’t needed.
SQL databases were built for a different era, in which disk space was at a premium; hence normalization, varchar, and manual indexes. This comes at a cost: big joins and manual denormalization for speed. Oftentimes on big projects, we have DBAs building giant stored procedures with all kinds of temp tables and weird optimization hacks to make shit fast.
Forget all that.
With Raven, you just store your stuff. Here’s how I store a C# object that contains a list of strings:
db.Store(codeCampTalk);
Notice I didn’t have to create tables. Notice I didn’t have to create foreign key relationships. Notice I didn’t have to create columns with types in tables. Notice I didn’t have to tell it how to map codeCampTalk.Tags, a list of strings, to the database.
Raven just stores it.
And when we’re ready to query, it looks like this:
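A sketch of what that query can look like (the Talk class and tag value are illustrative):

```csharp
using (var session = store.OpenSession())
{
    // Plain LINQ; Raven builds and manages the index behind the scenes.
    var angularTalks = session.Query<Talk>()
        .Where(talk => talk.Tags.Contains("angular"))
        .ToList();
}
```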
Notice I didn’t have to do any joins; unlike SQL, Raven supports encapsulation, whether that’s a list of strings, a single object inside another object, or a full list of objects. Honey Badger Raven don’t care.
And notice I didn’t have to create any indexes. Raven is smart about this: it creates an index for every query automatically, then uses machine learning (a kind of AI for your database) to optimize the ones you use the most. If I’m querying for Talks by .Author, Raven keeps that index hot for me. But if I query for Talks by .Bio infrequently, Raven will devote server resources (RAM, CPU, and disk) to more important things.
It’s self-optimizing. And it’s friggin’ amazing.
The end result is your app remains fast, because Raven responds to how it’s being used and optimizes for that.
And I didn’t have to do anything to make that happen. I just used it.
Zero friction. I love RavenDB.
If you’re doing .NET, there really is no reason you shouldn’t be using it. Having built about 10 apps in the last 2 years, both professional and side projects, I have not found a case where Raven is a bad fit. I’ve stopped defaulting to crappy defaults. I’ve stopped defaulting to SQL and Entity Framework. It’s not 1970 anymore. Welcome to modern, flexible, fast databases that work with you, reduce friction, work with object-oriented languages, and optimize for today’s read-heavy web apps.
AngularJS
In the bad ol’ days, we’d write front-end code like this:
JavaScript was basically used to wire up event handlers. And we’d do some postback to the server, which reloaded the page in the browser with the new email address. Yay, welcome to 1997.
Then we discovered jQuery, which was awesome. We realized that the browser was fully programmable, and jQuery made it a joy to program. So instead of doing postbacks and having static pages, we could just update the DOM, and the user would see the new email address:
And that was good for users, because the page just instantly updated. Like apps do. No postback and full page refresh; they click the button and instantly see the results.
This was all well and good, until we realized our code was pretty ugly. I mean, look at it. DOM manipulation isn’t fun, and it’s error prone. Did I mention it was ugly?
What if we could do something like this:
Whoa! Now anytime we update a JavaScript variable, .emailAddress, the DOM instantly changes! Magic!
No ugly DOM manipulation, we just change variables and the browser UI updates instantly.
This is data-binding, and when we discovered it in the browser, all kinds of frameworks popped up that let you do this data-binding. KnockoutJS, Ember, Backbone, and more.
This was all well and good until we realized that while data-binding is great, it kind of sucks that we still have full page reloads when moving from page to page. The whole app context is gone when the page reloads.
What if we could wrap the whole time a user spends in our web app into a cohesive set of views the user navigates without losing context? And instead of a mess of JavaScript, what if each view had its own class? That class has variables in it, and the view data-binds to those variables. And that class (we’ll call it a controller) loads data from the server using classes called services. Now we’re organized and cooking with gas.
Enter AngularJS. Angular makes it a breeze to build web apps with:
Client-side navigation. This means that as the app moves between pages, say between the Schedule and Talks pages, your app hasn’t been blown away. All your variables and state are still there.
Data-binding. You put your variables in a class called a controller: …and then in your HTML, you data-bind to those variables. Then, any time you change the variables, the DOM (that is, the web browser UI) automatically updates. Fantastic.
Angular also adds structure. You load data using service classes, which are automatically injected into your controllers. Your controllers tell the service classes to fetch data. When the data returns, you set the variable in your controller, and the UI automatically updates to show the data:
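The shape of that controller/service pairing can be sketched like this (names are illustrative; TalkService stands in for a real $http-backed service):

```typescript
interface Talk {
  title: string;
  speaker: string;
}

// A service class fetches data; in a real app this would wrap $http.
class TalkService {
  getTalks(): Promise<Talk[]> {
    return Promise.resolve([{ title: "Intro to RavenDB", speaker: "Judah" }]);
  }
}

// The controller holds the variables the view data-binds to.
class TalksController {
  talks: Talk[] = [];

  constructor(private talkService: TalkService) {}

  // When the data returns, we set the field; the view re-renders.
  async loadTalks(): Promise<void> {
    this.talks = await this.talkService.getTalks();
  }
}
```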
Nice clean separation of concerns. Makes building dynamic apps â apps where the data changes at runtime and the UI automatically shows the new data â a breeze.
TypeScript
JavaScript is a powerful but messy language. Conceived in a weekend of wild coding in the late â90s, it was built for a different era when web apps didnât exist.
TypeScript fixes this. TypeScript is just JavaScript + optional types + features from the future, where "features from the future" means ES6, ES7, and later proposals – things that will eventually be added to JavaScript but that you won't be able to use everywhere for 5, 10, 15 years. You can use them right now in TypeScript.
TypeScript compiles it all down to normal JavaScript that runs in everybody's browser. But it lets you code using modern practices like classes, lambdas, properties, async/await, and more. And thanks to types, it enables great tooling experiences like refactoring, auto-completion, and error detection.
So instead of writing ugly JavaScript like this:
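As a sketch of the kind of ES5-era code meant here (the Person example is invented for illustration, not from the original post):

```javascript
// ES5 style: constructor functions, prototype methods, manual `this` capture.
function Person(name) {
    this.name = name;
    this.friends = [];
}

Person.prototype.addFriend = function (friend) {
    this.friends.push(friend);
};

Person.prototype.greetFriends = function () {
    var self = this; // capture `this` by hand for the callback below
    return this.friends.map(function (friend) {
        return "Hi " + friend + ", from " + self.name;
    });
};
```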
We can instead write concise and clean, intellisense-enabled TypeScript like this:
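For instance, a hypothetical Person class (invented for illustration) written in TypeScript, with a class, a typed property, a lambda, and a template string in place of the old prototype ceremony:

```typescript
class Person {
    friends: string[] = [];

    // Parameter property: declares and assigns `name` in one stroke.
    constructor(public name: string) { }

    addFriend(friend: string): void {
        this.friends.push(friend);
    }

    greetFriends(): string[] {
        // Lambdas capture `this` automatically; no `var self = this` needed.
        return this.friends.map(friend => `Hi ${friend}, from ${this.name}`);
    }
}
```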
Ahhh…lambdas, classes, properties. Beautiful. All with intellisense, refactoring, error detection. I love TypeScript.
There are few reasons to write plain JavaScript today. It's beginning to feel a lot like writing assembly by hand; ain't nobody got time for that. Use modern language features, and use powerful tooling to help you write correct code and find errors before your users do.
Bootstrap
You donât need to drink $15 Frappamochachino Grandes to design elegant UIs.
We've got code at our disposal that gives us a nice set of defaults, using well-known UI concepts and components to build great interfaces on the web.
Bootstrap, with almost no effort, makes plain old HTML into something more beautiful.
A plain old HTML table:
Add a Bootstrap class to the <table> and it's suddenly looking respectable:
A plain HTML button:
Add one of a few button classes and things start looking quite good:
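The markup involved is tiny. A sketch of the Bootstrap 3-era classes described above (the table data and button labels are invented for illustration):

```html
<!-- Illustrative Bootstrap 3-era markup; contents are invented. -->
<table class="table table-striped">
  <tr><th>Talk</th><th>Speaker</th></tr>
  <tr><td>Intro to RavenDB</td><td>Judah</td></tr>
</table>

<button class="btn btn-primary">Register</button>
<button class="btn btn-default">Cancel</button>
```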
Bootstrap gives you a default theme, but you can tweak the existing theme or use some pre-built themes, like those at Bootswatch. For TwinCitiesCodeCamp.com, I used the free Superhero theme and then tweaked it to my liking.
Bootstrap also gives you components, pieces of combined UI to build common UI patterns. For example, here is a Bootstrap split-button with drop-down, a common UI pattern:
Bootstrap enables these components using plain HTML with some additional CSS classes. Super easy to use.
Bootstrap also makes it easy to build responsive websites: sites that look good on small phones, medium tablets, and large desktops.
Add a few classes to your HTML, and now your web app looks great on any device. For TwinCitiesCodeCamp, we wanted to make sure the app looks great on phones and tablets, as many of our attendees will be using their mobile devices at the event.
Here's TwinCitiesCodeCamp.com on multiple devices:
Large desktop:
iPad and medium tablets:
And on iPhone 6 and small phones:
This is all accomplished by adding a few extra CSS classes to my HTML. The classes are Bootstrap responsive classes that adjust the layout of your elements based on available screen real-estate.
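As a sketch, assuming the Bootstrap 3-era grid classes: a row that shows three columns on desktops, two on tablets, and stacks full-width on phones might look like this (the content is invented for illustration):

```html
<!-- Hypothetical Bootstrap 3 grid: 3-across on desktops (col-md-4),
     2-across on tablets (col-sm-6), stacked full-width on phones. -->
<div class="row">
  <div class="col-md-4 col-sm-6">Session one</div>
  <div class="col-md-4 col-sm-6">Session two</div>
  <div class="col-md-4 col-sm-6">Session three</div>
</div>
```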
Summary
RavenDB, AngularJS, TypeScript, Bootstrap. It's a beautiful stack for building modern web apps.
Last week I sent the dreaded "I'm going out of business" email to the clients of my BitShuva Radio startup:
An unintentional startup
A few years ago, I wrote a piece of software to solve a niche problem: the Messianic Jewish religious community had a lot of great music, but no online services were playing that music. I wrote a Pandora-like music service that played Messianic Jewish music; Chavah Messianic Radio was born, and it's been great. (Chavah is still doing very well to this day; Google Analytics tells me it's had 5,874 unique listeners this month – not bad at all!)
After creating Chavah, I wrote a programming article about the software: How to Build a Pandora Clone in Silverlight 4. (At the time, Silverlight was the hotness! I've since ported Chavah to HTML5, but I digress.)
Once that article was published, several people emailed me asking if I'd build a radio station for them. One after another. It turns out there are many underserved music niches. Nigerian music. West African soul. Egyptian Coptic chants. Indie artists. Instrumentals. Ethiopian pop. A marketplace for beats. Local bands from central Illinois. All these clients came out of the woodwork, asking me to build clones of my radio station for their communities.
After these clients approached me – with no marketing or sales pitches on my part – it looked like a good business opportunity. I founded BitShuva Radio and got to work. I had a startup.
But after almost two years, making less than $100/month in recurring fees and spending hours every week working for peanuts, I've decided to fold the startup. It wasn't worth my time; it was eating into my family life and preventing me from working on things I really wanted to work on. So this week, I cut the cord.
Along the way, I learned so much! Maybe this will help the next person doing their first startup.
Here's what I learned:
1. Don't be afraid to ask for a *lot* of money.
When I acquired my first client, I had no idea how much to charge. For me, the work involved forking an existing codebase, swapping out some logos and colors, and deploying to a web server. A few hours of work.
I dared to ask for the hefty sum of $75.
Yes, I asked for SEVENTY-FIVE WHOLE DOLLARS! I remember saying that figure to the man on the other end of the phone – what a thrill! – $75 for forking a codebase, ha! To my surprise, he agreed to this exorbitant charge.
In my startup newbie mind, $75 seemed totally reasonable for forking a codebase and tweaking some CSS. After all, it's not that much work.
What I didn't understand was: you charge not for how much work it is for you; you charge for how much the service is worth. A custom Pandora-like radio station – with thumbs-up and -down functionality, song requests, user registration, native web audio with Flash fallbacks for old browsers, a community built around a niche genre of music – that's what you charge for. That's the value being created here. The client doesn't care that it's just forking a codebase and tweaking CSS; to him, it's a brand new piece of software with his branding and content. He doesn't know what code, repo forking, or CSS is. All he knows is he's getting a custom piece of software doing exactly what he wants. And that's worth a lot more than $75.
It took me several clients to figure this out. With my next client, I tried charging $100. He went for it. The next client, $250. The next, $500. Then $1000.
I kept charging more and more until finally three clients in a row turned down my $2000 fee. So I lowered the price back to $1000.
Money is just business. It's not insulting to ask for a lot of money. Charge as much as you can. Had I known this when I started, I'd have several thousand dollars more in my pocket right now.
2. Keep your head above the ever-changing technology waters
Don’t drown!
When I built the first radio software, Silverlight seemed like a reasonable choice. HTML5 audio was nascent, Firefox didn't support native MP3 audio, and IE9 was still a thing. So I turned to plugins.
Over time, plugins like Silverlight fell out of favor, particularly due to the mobile explosion. Suddenly, everyone's trying to run my radio software on phones and tablets, and plugins don't work there, so I had to act.
I ported my radio software to HTML5, with Flash fallbacks for old browsers. KnockoutJS was the new hotness, so I moved all our Silverlight code to HTML5+CSS3+KnockoutJS.
As the software grew in complexity, it became apparent that you really need something more than data-binding, but Knockout was just data-binding. Single Page Application (SPA) frameworks became the new hotness, and I ported our code over to DurandalJS.
Soon, Durandal was abandoned by its creator, who joined the AngularJS team at Google. Not wanting to be left on a dying technology, I ported the code to Angular 1.x.
If I were continuing my startup today, I'd be looking at riding that wave and moving to Aurelia or Angular 2.
What am I saying? Staying on top of the technology wave is a balancing act: stand still and you'll be dead within a year, but chase every new hotness and you'll be forever porting your code, never adding new features. My advice is to be fiscally pragmatic about it: if your paying clients need the new technology, migrate. Otherwise, use caution and take a wait-and-see approach.
Applying this wisdom in hindsight to my startup: it was wise to move from Silverlight to HTML5 (my paying clients needed that to run on mobile). However, jumping around from Knockout to Durandal to Angular did little for my clients. I should have used more caution and taken a wait-and-see approach.
3. Custom software is a fool's errand. Build customizable software, not custom software.
My startup grew out of clients asking for custom versions of my radio software. “Oh, you have a Pandora clone? Can you make one for my music niche?”
Naturally, I spent most of my time building custom software. They pay me a nice up-front sum ($1000 in the latter days), and we go our merry way, right?
Turns out, it’s a terrible business model. Here’s why:
Clients continually want more features, bug fixes, more customization. I charged that $1000 up-front fee to build a custom station, but then would spend many hours every week responding to customer complaints, customer requests, bug fixes, performance fixes, new features. And I didn’t charge a dime for that. (After all, the client’s perspective was, “I already paid you!”)
In hindsight, I should have built a customizable platform, à la WordPress: potential radio clients could go to my website, bitshuva.com, spin up a new radio station (mystation.bitshuva.com), and customize it in-browser. Let them use the whole damn thing for free, and when they hit a limit on songs or bandwidth, bring up a PayPal prompt. All of that is automated, none of it requires my intervention, and it's not "custom" software – it's software the clients themselves can customize to their OCDified heart's content.
Had I done that, my startup probably would be making more money today, maybe even sustainably so.
Bottom line: unless a client is paying 25% of your annual salary, don't follow the "I'll build a custom version just for you, dear client" business model. It's a fool's errand.
4. On Saying “No”
I'm a people-pleaser. So when a person pays me money, I amplify that people-pleasing by 10.
“Hey, Judah, can you add XYZ to my radio station this week?”
“Judah! Buddy! Did you fix that one thing with the logins?”
“How’s it going, Judah! Where is that new feature we talked about?”
“Hey Judah, hope it’s going well. When are you going to finish my radio station features? I thought we were on for last week.”
I wanted to please my precious clients. So of course I said “yes”. I said yes, yes, yes, until I had no time left for myself, my sanity, my family.
A turning point came for me over late December, at my in-laws'. I was upstairs working rather than spending the holidays with my kids and my wife. "What the hell am I doing?" The money I was making was small beans; why was I blowing my very limited time on this earth doing *this*?
Now I see why folks in the YCombinator / Silicon Valley startup clique put so much emphasis on being young, single, and all-in, working exclusively on your startup. I can totally see why, but I also completely don't want that lifestyle.
Maybe if I had followed YCombinator-level devotion to my startup, it would have grown. But the reality is, I value things outside of software, too. I like to chill, watch shows, and eat ice cream. I like to relax on the couch with my wife. I like to teach my son how to drive. I like to play My Little Ponies with my daughter. I like to play music on the guitar. I like to work on tech pet projects (like Chavah, MessianicChords, EtzMitzvot).
The startup chipped away at all that, leaving me with precious little time outside of the startup.
5. Startups force you to learn outside your technological niche
On a more positive note, running a startup taught me all kinds of things I would have never learned as a plain old programmer.
When I launched my startup, I was mostly a Windows desktop app developer (i.e. dead in the water). I didn't know how to run websites in IIS, work with DNS, or scale things; I didn't understand web development. I didn't know how to market software, how to talk to clients, or what prices to charge, and I didn't have an eye for "ooh, I could monetize that…"
Building a useful piece of software – a radio station used by a few thousand people – forces you to learn all this stuff and become proficient at building useful things.
In the end, getting all retrospective and hindsight-y here, I'm glad I took the plunge and did a startup, even though it didn't work out financially, because I learned so much and am a more well-rounded technologist for it. Armed with all this knowledge, I believe I'll try my hand at a startup again in the future. For now, I'm going to enjoy my temporary freedom.
Summary: With the departure of Microsoft’s CEO, what does the future hold? Irrelevance, unless a visionary comes to change course.
Microsoft's original vision – a PC on every desk and in every home – was a grand, future-looking vision. And Microsoft succeeded; that old vision is today's reality. Everyone has a computer, and Microsoft is largely to thank for that.
But today? Microsoft’s Ballmer-guided mantra, "We are a devices and services company", is not a grand vision. From the outside, Microsoft appears to be directionless, reactionary, playing catch-up.
Directionless: What's the grand Microsoft goal – what are they trying to achieve? The answer seems to be the mundane business goal of selling more copies of Windows. OK, that makes business sense in the short term. What about the future?
Reactionary: Microsoft got a PC on every desk. But instead of pushing computing forward via the web & mobile devices, they’ve been reactionary: letting these revolutions happen outside the company, then retrofitting their old stuff to the new paradigm.
Catch-up: Microsoft had a PDA, but never advanced it; it couldn't make phone calls. Microsoft won the browser war, then did nothing; it couldn't open multiple tabs. Microsoft had a tablet, but never pushed it to its potential; it never optimized for touch.
Instead, Microsoft stagnates while competitors step in and blow us away with PDAs that make phone calls, tablets that boot instantly, app stores that reward developers for building on the platform, and browsers that innovate in speed, security, and features. Microsoft keeps playing catch-up when it should be leading technology forward.
Microsoft needs a grand vision and a forward-looking leader to drive it. If they want to be a devices company, innovate with hardware – flexible, haptic displays for Windows Phone, for example. The huge R&D budget – $9.4 billion in 2012, outspending even Google, Apple, Intel, and Oracle – could play into this.
Will the next Microsoft CEO be a forward-looking tech visionary? Microsoft is headed towards consumer irrelevance and business stagnation. I’m convinced it will arrive at that destination unless a future-minded visionary reroutes the mothership.
It may sound grandiose, but it's essentially true: developers have a superpower. We're the inventors of the modern age. We have a power that is new to humanity: we can build useful things and instantly put a thousand eyeballs on them, all for about $0 and very little time investment.
(My startup company, BitShuva internet radio, was the product of about a weekend's work, in which I churned out a minimally viable product and published it in two days. The net result is several radio stations across the web and a few thousand dollars in the bank.)
The things we're doing with software are diverse and jaw-dropping.
Software is doing all that and more: giving us turn-by-turn directions, driving our cars, winning Jeopardy!, challenging chess champions, letting us communicate with anyone in the world at any time… the list is staggering and only growing.
And we, software developers, are the ones who make it all happen. This bodes well for our careers.
Building software is a superpower that shouldnât be wasted building CRUD apps for insurance companies. That may be necessary to pay the bills, but developers should build their side projects to advance their goals and tackle the things they want to tackle.
Build your side project, build what's interesting to you, build what you think the world needs. If nothing else, you'll expand your horizons. And if it works out, you might just have contributed something useful to the world and even made a little money on the side.
Looking for good software & technology conferences in 2013? I did a bit of scrounging around, talked with some colleagues, and came up with this big list of 2013 dev conferences, ordered by date.
W3Conf February 21-22 San Francisco, California The W3C's annual conference for web professionals. The latest news on HTML5, CSS, and the open web platform.
Web Summit March 1st London, Great Britain "Our events focus on giving attendees an incredible experience with a mix of world-leading speakers, buzzing exhibitions and effective, deal-making networking opportunities. Our illustrious list of past speakers includes the founders of Twitter, YouTube, Skype and over 200 international entrepreneurs, investors and influencers."
MX 2013 March 3-4 San Francisco, California UX and UI conference. "Managing Experience is a conference for leaders guiding better user experiences."
SXSW Interactive March 8-12 Austin, Texas "The 2013 SXSW® Interactive Festival will feature five days of compelling presentations from the brightest minds in emerging technology, scores of exciting networking events hosted by industry leaders, the SXSW Trade Show and an unbeatable lineup of special programs showcasing the best new digital works, video games and innovative ideas the international community has to offer. Join us for the most energetic, inspiring and creative event of the year."
Microsoft VSLive Vegas March 25-29 Las Vegas, Nevada .NET developer conference. "Celebrating 20 years of education and training for the developer community, Visual Studio Live! is back in Vegas, March 25-29, to offer five days of sessions, workshops and networking events – all designed to make you a more valuable part of your company's development team."
anglebrackets April 8-11 Las Vegas, Nevada anglebrackets is a conference for lovers of the web. We believe that the web is best when it's open and collaborative. We believe in the power of JavaScript, the expressiveness of CSS, and the lightness of HTML. We love interoperability and believe that the best solution is often a hybrid solution that brings together multiple trusted solutions in a clean and clear way. We love the expressiveness of language, both spoken and coded. We believe that sometimes the most fun at a conference happens in the whitespace between conference sessions. More details at Hanselman's blog.
Dev Intersection, SQL Intersection April 8-11 MGM Grand, Las Vegas, Nevada Visual Studio, ASP.NET, HTML5, Mobile, Windows Azure, SQL Server conference. Focused on .NET and SQL developers.
TechCrunch Disrupt April 27th-May 1st New York City, New York Technology and startups conference.
Google I/O May 15-17 (registration opens March 13th at 7am) San Francisco, California Probably the most anticipated developer conference in the world. Expecting some news on Google Glass, perhaps some haptics support in Droid devices, maybe a bit on self-driving cars… what's not to love? Tickets usually sell out immediately.
GlueCon May 22nd-23rd Denver, Colorado "Cloud, Mobile, APIs, Big Data – all of the converging, important trends in technology today share one thing in common: developers. Developers are the vanguard. Developers are building in the cloud, building mobile applications, utilizing and building APIs, and working with big data. At the end of the day, developers are the core."
Microsoft TechEd June 3-6 New Orleans, Louisiana Longstanding Microsoft developer and technology conference.
Mobile Conference June 6-7 Amsterdam, The Netherlands Conference for mobile devs, focusing on the future of mobile app development.
WWDC June 10-14 Apple's highly-anticipated Worldwide Developer Conference. Tickets go on sale April 25th.
Norwegian Developer Conference (NDC) June 12-14 Oslo, Norway Huge developer conference featuring some of the biggest speakers in software, including Jon Skeet, Scott Meyers, Don Syme, Scott Allen, and Scott Guthrie.
Microsoft BUILD June 26-28 San Francisco, California Microsoft's one big Windows developer event. All the big Microsoft names – from Guthrie to Hejlsberg to Hanselman – will be there. Expect great technical presentations, tablet giveaways, and an all-hands-on-deck Microsoft powerhouse conference.
SIGGRAPH 2013 July 21-25 Anaheim, California 40th international conference and exhibition on computer graphics and interactive techniques. Graphics, mobile, art, animation, simulations, gaming, science.
OSCON July 22-26 Portland, Oregon Biggest open source technology conference.
ThatConference August 12-14, 2013 Kalahari Resort, Wisconsin Dells, WI Spend three days with 1,000 of your fellow campers in 150 sessions, geeking out on everything mobile, web, and cloud at a giant waterpark.
<anglebrackets> October 27th-30th MGM Grand, Las Vegas, Nevada Hosted by renowned developer and speaker Scott Hanselman, <anglebrackets> is a conference for lovers of the web. We believe that the web is best when it's open and collaborative. We believe in the power of JavaScript, the expressiveness of CSS, and the lightness of HTML. We love interoperability and believe that the best solution is often a hybrid solution that brings together multiple trusted solutions in a clean and clear way. We love the expressiveness of language, both spoken and coded. We believe that sometimes the most fun at a conference happens in the whitespace between conference sessions.
Author's note: I attended the spring <anglebrackets> in April, and it was positively fantastic. Highly recommend this conference.
If youâre into futurism and technology evolution, The Singularity Summit might be for you, with speakers like Ray Kurzweil and Peter Norvig. The dates for 2013 are yet unannounced.
As for me, I'm headed to anglebrackets/DevIntersection in April. This dual conference will host speakers like Scott Hanselman, Phil Haack, Damian Edwards, Elijah Manor, and Christian Heilmann. Should be a blast!
Know any good conferences not listed here? Let me know in the comments.