The Server-side JavaScript runtime

I’m back, after a short break and much swearing at third-party libraries. I was going to use React.NET as that bundles up all the things required, but it has too many opinions about how you call your React components. Since I’m using Redux and react-router this makes most of its code redundant, so I went back to using the raw components.

Additionally, I switched from V8 to using Microsoft’s Chakra runtime because the existing V8 interop libraries do not support .NET Core yet. I’m using JavaScriptEngineSwitcher so it’s not hard to change out the engine later.

The libraries I’m using are:

"Newtonsoft.Json": "7.0.1",
"JavaScriptEngineSwitcher.Core": "1.2.4",
"JSPool": "0.3.1",
"JavaScriptEngineSwitcher.Msie": "1.4.0"

Setting up the JS runtime

The documentation for everything except JSON.NET is lacking. This is common in open source software: people are donating their time, and I’m just glad someone’s done most of the work already. Fortunately this is a hobby project, so I could take the time to read the source of the unit tests to figure out what to do.

The engine pool

The first thing to do is create the JSPool configuration. JSPool manages a pool of JavaScript engines much like a SQL connection pool: like a SQL connection, each engine is single-threaded and has a noticeable startup time. You can tune the number of engines and how often they recycle. If there isn’t an engine free to service a request, the pool blocks until one’s available.

Right now I’m hard coding the settings, but these can easily be loaded from a configuration file.

var poolConfig = new JSPool.JsPoolConfig();
poolConfig.MaxUsagesPerEngine = 20;
poolConfig.StartEngines = 2;

Engine Configuration

JavaScriptEngineSwitcher can load its configuration from the standard ASP.NET web.config, and will set this up when you install the NuGet packages. ASP.NET 5 doesn’t have a web.config, so that isn’t done automatically. However, I want certainty about the runtime as I’ll be testing against it, so I’ll just configure it in source.

The only configuration I need is to set the engine mode to Chakra Edge. This does limit the app to running on machines with Edge installed, but the beauty of JavaScriptEngineSwitcher is you can change the engine without changing your code.

var ieConfig = new JavaScriptEngineSwitcher.Msie.Configuration.MsieConfiguration
{
    EngineMode = JavaScriptEngineSwitcher.Msie.JsEngineMode.ChakraEdgeJsRt
};

Once this has been done I create a small factory that simply creates the new IE JS engine.

poolConfig.EngineFactory = () => new JavaScriptEngineSwitcher.Msie.MsieJsEngine(ieConfig);

Loading the JavaScript

First the file has to be found. I’ve decided I’m keeping this in wwwroot/js/server.js, so I’ll have to use a PhysicalFileProvider to obtain it. Creating this needs the application base path; the easiest way to obtain this is from the IApplicationEnvironment provided by ASP.NET.

var appEnv = provider.GetRequiredService<IApplicationEnvironment>();
var fileProvider = new PhysicalFileProvider(appEnv.ApplicationBasePath);
var jsPath = fileProvider.GetFileInfo("wwwroot/js/server.js").PhysicalPath;

Next up, a quick function to load the file into the engine.

public static void InitialiseJSRuntime(string jsPath, IJsEngine engine)
    => engine.ExecuteFile(jsPath);

Finally, load that all into the JSPool config. I also set WatchFiles to the JS path, so when it changes the engines are automatically restarted.

poolConfig.Initializer = engine => InitialiseJSRuntime(jsPath, engine);
poolConfig.WatchFiles = new[] { jsPath };

Binding the service

Wrap all of the above into a static method

private static JSPool.IJsPool CreateJSEngine(IServiceProvider provider)
{ ... }

And finally, register this method with `AddSingleton` in `ConfigureServices`, and the pool is available to the rest of the application.
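As a sketch, the registration looks something like this (assuming the factory method above lives in the Startup class):

```csharp
// Register the JS engine pool as a singleton, built by the factory above
services.AddSingleton<JSPool.IJsPool>(CreateJSEngine);
```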


The render method

I’m rendering from Razor templates, so I’m creating a Razor helper method for this. One of my goals is to have all pages be proper URLs, so every page can be rendered server-side properly. Combining this with react-router means the pages are pretty small. To start with I import these namespaces

using JSPool;
using Microsoft.AspNet.Mvc.Rendering;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

The method is a static extension method on HtmlHelper, like many other ASP.NET MVC helpers. The parameters are the JavaScript method to call, the model to pass in, and the bundle name. I’m using browserify to separate the common libraries and views from the page-specific ones, so the bundle contains the JavaScript needed to run the page browser-side.

public static object RenderJS(this IHtmlHelper helper, string entryPoint, 
                              object model, string bundle)

Grab some services. As this is a helper extension method we can’t use proper DI.

var appServices = helper.ViewContext.HttpContext.ApplicationServices;
var pool = appServices.GetRequiredService<IJsPool>();

Due to the limitations of the JS engine calls, we marshal everything through JSON. It’d be nice if the engine wrappers did this automatically, but it would happen the same way under the hood.

var jsModel = JsonConvert.SerializeObject(model);

Then call the appropriate function on an engine from the pool

var result = pool.GetEngine().Evaluate($"pages.{entryPoint}({jsModel})") as string;
if (result == null)
{
    var logger = appServices.GetRequiredService<ILoggerFactory>().CreateLogger("JSRender");
    logger.LogError($"JavaScript failed to return a string when calling {entryPoint}");
    throw new System.Exception($"JavaScript failed to return a string when calling {entryPoint}");
}

Evaluate returns an object, so I’ve added a check that the return type is correct, just in case. Once the result has returned we deserialize it. I’m lazy, so I’ll just use dynamic as the target type.

var resultObject = JsonConvert.DeserializeObject<dynamic>(result);

Last bit of boilerplate, I store an id in the viewbag so I can support multiple react components per page, though I doubt I ever will.

int reactId = helper.ViewContext.ViewBag.REACT__ViewId ?? 1;
helper.ViewContext.ViewBag.REACT__ViewId = reactId + 1;

And with all that done, I wire up the HTML needed.

The render methods return a JSON object that looks like {html: htmlString}. The HTML output my setup needs is two script tags, one for the common libraries and one for the page-specific bundle, then a call to the render entry point. I use the page prefix when creating the JavaScript bundle. Then I render the content into a named div so React can find it later.

var resultScript = $"<script src=\"/js/libs.js\"></script><script src=\"/js/{bundle}.js\" type=\"text/javascript\"></script>";
resultScript += $"<script type=\"text/javascript\" defer>page.{entryPoint}('component-{reactId}')</script>";
var resultHtml = $"{resultScript}<div id=\"component-{reactId}\">{resultObject.html}</div>";

Finally, return the result in an HtmlString so it gets rendered without escaping

return new HtmlString(resultHtml);

And we’re done!

Next up I’ll go through my gulp pipeline for taking TypeScript and building the bundles needed to run this.

As always the code can be found in my GitHub repository


The Web UI

Time for the web UI! I’m building this using ASP.NET 5 and ASP.NET MVC 6 for the controller side and React, Redux, Immutable and TypeScript for the view layer. I’m not a fan of node.js, but I do like the web view being defined in one place so running React on the server side appeals to me. I could run it fully in the browser but servers tend to be faster than clients, especially mobile clients.

I’m using React and Redux because they work well with how I like to think about my applications. User actions trigger state changes which then cause a re-render. Having a single state object makes it much easier to figure out what causes problems, and it reduces the overall complexity of the application.

Immutable is just used for confidence that the Redux action-reducer-props pattern is followed without accidentally mutating state. I’ve seen other technology like freezer used for this, but freezer is more of a full replacement for Redux.

Setting up the server

Starting from the default ASP.NET 5 template we’re going to make a few modifications. First we’re going to switch from SQL Server to SQLite storage. If I actually release this I’d probably switch back to SQL Server, but I don’t want to run SQL Server for a website serving just me.

Unfortunately, the Entity Framework migrations aren’t database agnostic, so we first have to remove them.

dnx ef migrations remove

Then we switch over to SQLite in the project.json file by replacing EntityFramework.SqlServer with EntityFramework.Sqlite. At this point I had to make some changes to the configuration to load SQLite: I had to switch the DbContext configuration to use SQLite, so in ApplicationDbContext replace OnConfiguring with this

var appEnv = (IApplicationEnvironment)CallContextServiceLocator.Locator
    .ServiceProvider.GetService(typeof(IApplicationEnvironment));
optionsBuilder.UseSqlite($"Data Source={appEnv.ApplicationBasePath}/mh.sqlite");

and replace the EF configuration in Startup.cs with this


services.AddTransient<ApplicationDbContext, ApplicationDbContext>();

and finally at the end of the Configure method I added the automatic migration call

var db = app.ApplicationServices.GetRequiredService<ApplicationDbContext>();
db.Database.Migrate();


Next I regenerated the migration

dnx ef migrations add CreateIdentity

And now the demo ASP.NET MVC 6 site works, with logins and authentication using SQLite.

Next up I’ll be setting up the JavaScript packages and configuring server-side rendering.


Api controllers

The backend API servers are going to use the WebAPI component of ASP.NET 5 to make a RESTful API. Since I’m using Entity Framework for the database layer this is a remarkably simple piece of code, especially since most of it will be automatically generated.

Right now I’ll focus on the basic create/read/update/delete (CRUD) methods, later on I’ll get to some more interesting logic when it’s time for automatic transaction posting and budget forecasting.

As last time I’ll just show one class, since right now they’re almost identical. I’ll post snippets when I use more features later on, but right now I want to get enough backend code completed to start on the UI layer.

The Category Controller

ASP.NET 5 makes this pretty easy. Compared to WebAPI 2.0 there are fewer options but commonly used addons - such as dependency injection and attribute routing - are built in.

Let’s start the class

[Route("api/[controller]")]
public class CategoriesController : Controller

The route is specified as an attribute. I think this is clearer since you always know the URI to your current class, no more magic string chopping to get the name. The biggest downside is there isn’t a central list of routes, but considering that most of the previous routes contained magic controller names I don’t think this changes much.

Next up member variables and the constructor

private MHContext _Context;

public CategoriesController(MHContext context)
{
    _Context = context;
}

Read actions

The Entity Framework context is injected into the constructor by ASP.NET, ready to go. Previously I tended to use Ninject for this, but the new built-in DI framework is perfectly adequate for this app. This context is bound to the request scope and automatically releases the connection at the end.

With that done it’s time for the basic CRUD methods. I’m going to ignore a lot of the setup for database loading, for example eager and lazy loading collections. I’ll add them as needed. First I’ll do the listing of all categories.

public IEnumerable<Category> Get() => _Context.Categories;

This is probably the simplest non-trivial method I’ve ever written. I don’t think this is going to be too common, normally there’ll be some sort of mapping and pagination going on here.
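For illustration only, a paginated version might look something like this (the page and pageSize parameters are my own invention, not part of the app):

```csharp
// Hypothetical paginated listing; would clash with the parameterless Get if both existed
public IEnumerable<Category> Get(int page = 0, int pageSize = 50) =>
    _Context.Categories
            .OrderBy(c => c.CategoryId)
            .Skip(page * pageSize)
            .Take(pageSize);
```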

Write Actions

Now we want to get the detail on a specific category. This is a bit longer as I need to be able to return 404 not found on an invalid ID.

public ActionResult Get(int id)
{
    var category = _Context.Categories.FirstOrDefault(c => c.CategoryId == id);
    if (category == default(Category))
        return new HttpNotFoundResult();
    return new ObjectResult(category);
}

I can’t use a specific type for the action result since there are two possibilities (MVC 6 removed HttpResponseException). Fortunately I can use ActionResult and return the appropriate subtype; ObjectResult will handle serialisation for me.

That’s the read methods done, now for the writes. First I’ll define an object for the write APIs; I find it easier to follow than partially exposing the underlying entity. This is generally what will happen for all APIs, including the read ones, but I’m being lazy.

public class ApiCategory
{
    public string Name { get; set; }
    public string Description { get; set; }
    public bool IsIncome { get; set; }
}

With that done the POST method can be added to create new categories

public void Post([FromBody]ApiCategory newCategory)
{
    _Context.Categories.Add(new Category
    {
        Name = newCategory.Name,
        Description = newCategory.Description,
        IsIncome = newCategory.IsIncome
    });
    _Context.SaveChanges();
}

I could use the asynchronous methods to run this, but given we’re talking to local SQLite databases I think the overhead of async is probably larger than the benefit. The frontend UI will be using async methods more.

I’m going to allow updates of name and description, so for that we need a PUT method

// PUT api/values/5
public async Task Put(int id, [FromBody]ApiCategory newCategory)
{
    var category = _Context.Categories.First(c => c.CategoryId == id);
    category.Name = newCategory.Name;
    category.Description = newCategory.Description;
    await _Context.SaveChangesAsync();
}

And finally delete

public void Delete(int id)
{
    _Context.Categories.Remove(_Context.Categories.First(c => c.CategoryId == id));
    _Context.SaveChanges();
}

It’s all pretty simple now, but it’ll get interesting as we add more relations and expand our API.

Next up I’ll start on the UI, which will be more interesting.


Models and entities oh my

I promised code, so it’s time to get coding! I’m going to start with the API server, then implement the UI on top of that.

The backend is going to be built using ASP.NET 5 and WebAPI for the web layer. The database is going to be SQLite and I’ll be using Entity Framework 7 for the database access layer. These choices are mostly because they’re the defaults in ASP.NET, but I don’t have much experience with Entity Framework so it’s a great opportunity to learn.

I’ll be using Visual Studio 2015 community edition, but since the ASP.NET 5 projects don’t require MSBuild this could easily be Visual Studio Code on Mac or Linux. You’ll need the ASP.NET Release Candidate or a later version.

Setting up

First thing: new project time. In Visual Studio create a new C# WebAPI project targeting ASP.NET 5, or use yeoman to create the template.

Now we have a skeleton project with just what’s needed to run WebAPI. You can run dnx kestrel to see the default page, but I’m just going to get started adding code.

First we need to add Entity Framework, so add these to the dependencies section of your project.json file:

"EntityFramework.Sqlite": "7.0.0-rc1-final",
"EntityFramework.Commands": "7.0.0-rc1-final"

Visual Studio will automatically fetch the NuGet packages as you save the file; if you’re not using Visual Studio you might need to run dnu restore.

While we’re in project.json we’ll add the entity framework CLI commands, so just add

"ef": "EntityFramework.Commands"

to the commands section. This will let us generate migrations and automate some common entity framework commands.
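For context, the commands section ends up looking something like this (the kestrel command comes from the template and may be named differently in yours):

```json
"commands": {
  "kestrel": "Microsoft.AspNet.Server.Kestrel",
  "ef": "EntityFramework.Commands"
}
```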

Model Objects

First we’ll add our models. I’ll just cover a couple here since it’s quite repetitive. Since Categories are central to our data model let’s set that up first, then I’ll show the budget relation.

Entity Framework 7 uses the attributes from System.ComponentModel.DataAnnotations to configure the models as Plain Old C# Objects (POCOs, like POJOs from Java-land). This means we don’t need any external XML or C# configuration files or any common base classes to set up our models, which coupled with convention-based configuration makes the models quite succinct. This also means we can treat our models as any other C# object - they can easily be serialised to JSON or other formats without causing a huge property storm.

Creating the models

Let’s start our category with a name and a sequential key for the primary key.

public class Category
{
    public int CategoryId { get; set; }

    [StringLength(100), Required]
    public string Name { get; set; }
}

CategoryId follows an Entity Framework convention so we don’t need to specify that it’s the primary key or that it needs a sequence, it’s just assumed. You can disable or override this if needed, so don’t worry about being stuck with this.
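If you ever need to override the convention, the DataAnnotations attributes cover that too; for example (a hypothetical class, not part of this project):

```csharp
public class LegacyCategory
{
    // This name wouldn't be picked up by convention, so mark the key explicitly
    [Key]
    public int LegacyIdentifier { get; set; }
}
```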

Next up budgets!

public class Budget
{
    public int BudgetId { get; set; }

    public Category Category { get; set; }
}

Specifying the Category class will automatically generate the column and foreign key. Now this is only half of the relationship, so let’s go back to Category and add the link to the budget. As we can only have one budget per category this is a simple relationship, so add this to the Category class.

public Budget Budget { get; set; }

Unfortunately this is deceptive. One-to-one relationships need a master side to hold the ID, but there isn’t an attribute for this. We’ll configure this in the Context.

Configuring the context

Next up, the db context and the service bindings so Entity Framework loads properly. Entity Framework uses a DbContext to hold all the mapped objects and track their changes. You can have multiple contexts if you have different database connections or different object lifecycles, but we only have one and it lives in a SQLite database.

public class MHContext : DbContext
{
    public DbSet<Category> Categories { get; set; }
    public DbSet<Budget> Budgets { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Category is the master side, holding the foreign key to its one Budget
        modelBuilder.Entity<Category>().HasOne(c => c.Budget)
            .WithOne(b => b.Category).HasForeignKey<Category>("BudgetId");
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        var appEnv = (IApplicationEnvironment)CallContextServiceLocator.Locator
            .ServiceProvider.GetService(typeof(IApplicationEnvironment));
        optionsBuilder.UseSqlite($"Data Source={appEnv.ApplicationBasePath}/mh.sqlite");
    }
}

The Context class is used both when generating the migrations and when starting the application. OnModelCreating is used to override or add to the models determined from the attributes. In this case Entity Framework is being configured so that Category has one Budget, and the Category table should hold the reference.

OnConfiguring is called when Entity Framework is used to access a database. The only magic here is the CallContextServiceLocator, which is provided as part of ASP.NET. Since this class can’t be created via IoC (the context has to work outside the built-in ASP.NET IoC chain) a service locator is used.

IoC and migrations

Finally it’s time to add the Entity Framework bindings to Startup.cs, the ASP.NET 5 equivalent of Global.asax. In ASP.NET 5 this is the entry point to the application (you can even read command line arguments!).

public void ConfigureServices(IServiceCollection services)
{
    services.AddEntityFramework().AddSqlite().AddDbContext<MHContext>();
}

That’s the end of the boilerplate! Just one more command to run - Entity Framework has to create database scripts if it is going to migrate the database automatically. This has to be done from the command line.

dnx ef migrations add Initial

Entity Framework will add a couple of files to your project containing the database creation scripts, then we’re ready to go!

I’ve skipped over some of the models since there’s not much interesting in them, but the whole solution is up on my GitHub page.

This sets the scene for the API layer, which I’ll cover in the next post.


Planning the finance app

I don’t like deciding too much up front, but understanding what you’re trying to do helps. I tend to focus more on the logic side rather than the UI - since I’m not that great at UI work - so I’ll start from the data model and work up.

What I want out of this

To figure out the data model I need to figure out what I want out of this. The two biggest parts are setting a budget and seeing how well actual spending compares to that budget. In the future I’d like to do things like analyse spending patterns and maybe intelligently inform the budget, but that’s a later goal.

I also want to have some implementation of shared accounts. Since I don’t have a joint account with my wife we transfer money between our individual accounts a lot. It’d be nice to be able to have a virtual spending account to track all this. I have ideas on how to implement this with the architecture from the first post, but that’s for another time.

I’m also going to defer repeating transactions. They’re useful from a budget point of view, but they don’t need to be there from day one.

Linking budgets to spending

I’m going to follow a tried-and-tested method: categories. Budgets and actual transactions can both be linked to a category, which lets them be compared.

I’m going to give categories a type: expense, income, and transfer. Each has a natural transaction direction, but I won’t enforce it, as sometimes you want to reverse it. For example, medical expenses and then insurance reimbursements: the reimbursement shouldn’t be counted as income, but it does offset medical expenses. Perhaps you might want to do this differently, but I’m writing this for me!
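As a sketch of how that might appear in the model (hypothetical names, the real code may differ):

```csharp
// Hypothetical category type; each kind has a natural direction, but it isn't enforced
public enum CategoryType
{
    Expense,   // natural direction: money out
    Income,    // natural direction: money in
    Transfer   // between accounts, no net effect
}
```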


Accounts

Just like at the bank! I’m not sure how different account types should behave, but I’ll store the kind (everyday, savings, credit card, loan) for later.

Importing Transactions

I do almost all of my transactions via EFTPOS, so I want to grab as much from my bank as possible. I wish there was a handy API or automatic email drop, but until that happens I’ll have to deal with periodic uploads. This means there’ll be duplicate transactions to detect on import.

To start with I’ll only allow OFX imports, as my bank supports it pretty well. CSV would require more custom work and isn’t as portable.

I’d like to automate this, but I’ll add an API to start with.


Cash

I rarely use cash, but it’d be nice to be able to track it. I’m currently planning to treat cash as another kind of account and transfer into it.

Manual Transactions

Most transactions will be imported, but since there will be cash accounts we’ll need manual transactions. Hopefully the transaction-matching logic will cope with this so transactions can be entered before they occur at the bank (repeating transactions?), but that’ll come later.

Initial data model

[Diagram: data model 1.png]

And that’s it for now. Next up we’ll start some code!


Time for a new project

It’s been a while since I posted anything here, so I’ve decided to find a topic to motivate me.

This post is the first in a series about my personal finance / budgeting system. I’m creating this for several reasons, one of course is to track my budget, but that’s not the main reason.

I’m building the application with an interesting architecture. I don’t think it’s novel, but it is new to me. This architecture is based on observations of other personal finance applications. The main thing I noticed was that they all seemed to use a large multi-tenanted database, with thousands of customers sharing one store. This required sharding the data into many databases to scale, and required careful design of the schema to optimise storage.

I wondered if, instead of a few large stores, many smaller stores would work. This isn’t new, and the arguments against it are well known. Most database servers require maintenance: they’re connected to networks so they need security patches and updates, backups need to be managed and restores tested, and migrations need to be tested and deployed in sync with application code. Additionally, database creation and access control are traditionally infrequent tasks performed by DBA-vetted scripts.

These are all good arguments, but on modern hardware we have a fast and safe alternative to full database servers: SQLite.

SQLite is an embedded SQL engine that offers broad SQL language coverage, ACID compliance and even multi-user read access (though I won’t use this feature). More importantly it stores data in a single file (or a few files if you use a write-ahead log), making it easy to move and back up. This would also make a phased roll-out possible, with each store being updated on demand.

The best way to see how crazy an idea is is to build it, so that’s what I’m doing!

Broadly it will look like this:

WebUI - Broker - API/DB

The broker takes authentication information from the web interface and forwards it to an API server for that user. The broker is also responsible for starting API servers when there isn’t one for a given user. API servers will shut down after a period of inactivity.

I’m also taking this as a chance to use ASP.NET MVC 6, .NET Core and React (hopefully with server-side rendering), and to write more blog posts. Hopefully linking the two will motivate me to do both more.


When it hurts...

Since the rise of comments on news websites I’ve seen more than a few asking how someone chronically ill can take holidays, or that they “look well enough to me”. Here’s my story.

I have an undiagnosed chronic illness. I’ve been suffering for six months now.

There are no outward signs of any problems, if you looked at me without specialist knowledge you’d have no idea that anything was wrong - and even then there’s only a few signs. I work regular hours, cook and clean my own house and drive a car when needed. Most of the time I’m a “normal person”.

However, some days my body decides to be different. Joints ache, muscles seize up, my vision and hearing become fuzzy, and my limbs become weakened. At its worst I’m unable to sit or stand and have to take time off work, and stairs become hard to use. As I work in software development anything that affects my vision and ability to type is seriously crippling to me.

I’ve taken so many anti-inflammatories I’ve developed an intolerance to them, I have to take the non-subsidised meloxicam instead of the subsidised naproxen. Frustratingly enough my health insurance only covers subsidised drugs. I also regularly take the maximum dose of paracetamol (1 gram every 4 hours), which isn’t really good for you. At some points I’ve had to resort to codeine to manage the pain, but fortunately this is rare.

I’ve seen two specialists, had many blood tests and had an MRI. Nothing conclusive has been found, other than a slightly raised level of anti-nuclear antibodies. This, along with my symptoms, points to an autoimmune problem, but there’s normally one critical sign I’m missing that would point to a given diagnosis.

I’m lucky - I have health insurance partially paid for by my employer. This means I’ve been able to bypass public waiting lists. I know some people get annoyed at this, but personally I don’t see the issue. I’m still paying for the public system through my taxes and I don’t intend to stop, but since I can afford to I’ll lighten the load on the overloaded public system.

That still doesn’t change the reality though, 95% of the time I’m able to do things like everyone else. 5% though I’m barely able to walk around my own house. If this increased to 50/50 I wouldn’t be able to work full time, but half the time I’d be well enough to travel.

So remember, just because somebody “doesn’t seem sick” it doesn’t mean that they aren’t, you’ve probably just caught them on a good day.

I don’t have comments on this blog, but feel free to contact me on twitter.


It's all about context

This is something that’s caught me out over the years, and I’m sure a lot of other people have done the same. When writing code or documentation, or even an email, we leave out details we think are “obvious”. What we don’t think of is our own specialist knowledge: the invisible knowledge we have of our environment, or our context.

These days I work on accounting software, but in addition to my software development degree many years ago I also started an accounting degree using my elective papers. This means I have more knowledge than a lot of new developers. I have to think about whether I’m applying programming or accounting knowledge when writing my comments.

For example, this would be obvious to someone with experience in both fields (in pseudo-code):

public enum DrCr { Debit, Credit }
public class AccountValue { Decimal Value; DrCr Sign; }
public AccountValue GetAccountValue(Decimal value, DrCr positive) {
    if (value >= 0) { return AccountValue(value, positive); }
    return AccountValue(-value, positive == Debit ? Credit : Debit);
}

but for someone outside the accounting field this makes no sense - why not just use negative numbers? The reason is that accounting predates the acceptance of negative numbers, and history is hard to change.
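To make that concrete, here is how the pseudo-code above behaves (still pseudo-code, following the snippet):

```csharp
GetAccountValue(120, Debit)  // (120, Debit): positive values keep the natural sign
GetAccountValue(-50, Debit)  // (50, Credit): the sign flips rather than going negative
```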

I’ll also add a small anecdote I saw printed in the newspaper here a few years ago. Alas this was before online archives, so I can’t link to the source.

A couple was visiting my city from another city in the same country. They were planning to travel out to visit family on the train. In their city there is one rail line, so when they looked at the departures board and saw ‘Stopping at all stations’ they got on that train. Unfortunately we have four lines here, and they didn’t know to check which line they were heading for. A bad design of signage had these two critical pieces of information separated, and since everyone here (or from any city with multiple rail lines) instinctively knows to check, this was rarely a problem.

Context is everything.


Swift vs C# - My take

Inspired by this comparison of Swift and C# I’ve put together my own list of my favourite features of each language.

My day job is as a .NET developer, and I know C# and its pitfalls quite well. I’ve been playing with Swift since it was released.

Throughout these examples I’ll be concentrating on the syntax, I may leave some details unimplemented.

Nulls / Nils

In Swift nil-valued objects are opt-in. You cannot pass a nil unless the function or property allows it.

C# - you’ll get an error at run time

public void DoImportantThings(string cannotBeNull)
{
    if (cannotBeNull == null)
        throw new ArgumentNullException(nameof(cannotBeNull));
    // ...
}

public void SillyMethod() { DoImportantThings(null); }

Swift - you’ll get an error at compile time

func doImportantThings(cannotBeNull: String) { ... }
func sillyMethod() { doImportantThings(nil) }

This is fantastic! So many errors are caused by unexpected nulls, and that whole set has been eliminated! Next I can hope for a language with built in data validation.


Closures

Let’s pick list sorting as an example

myList.Sort((a, b) => a.CompareTo(b))
myList.sort { $0 < $1 }

Personally I'm uncertain about Swift's implicit argument names, but I think it'll stick for small functions.

Now say we want to reuse that sorter. Idiomatic C# would break it out into a method

private int MySort(int a, int b) { return a.CompareTo(b); }

or you could use a function variable, but that breaks type inference

Func<int, int, int> mySort = (a, b) => a.CompareTo(b);

Swift does a bit better on the second count

let mySort = { (a: Int, b: Int) in a < b }

the return type is inferred, but since Swift lacks automatic generalisation (as F# has) it requires type annotations. Given the rest of the functional suite I hope auto-generalisation comes to Swift

Data types

C# follows other C-based languages in data types. There are primitive types, functions, classes, structs and enumerations. Functions and closures are first class, which is great, but we’re still stuck with the old C/C++/Java ways of grouping things. I won’t go into those here

Swift has a much richer set of types, and coupled with the pattern matching system this leads to more expressive code


Tuples

While C# has tuples, they're just another class. This makes them somewhat hard to work with; for example, returning multiple values in C#:

public Tuple<int, int> GetTwoInts()
{
    return Tuple.Create(1, 2);
}

var items = GetTwoInts();
var item1 = items.Item1;
var item2 = items.Item2;

with Swift:

func getTwoInts() -> (Int, Int) { return (1, 2) }
let (item1, item2) = getTwoInts()
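Swift tuples can also name their elements, which reads better at the call site (a small sketch):

```swift
// Elements can be labelled in the return type...
func getTwoInts() -> (first : Int, second : Int) {
    return (1, 2)
}

// ...and then accessed by name instead of by position.
let result = getTwoInts()
let sum = result.first + result.second
```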


Enumerations

In C# enumerations are simply names for integers. They always have a numeric value you can cast to and from.

Swift's enumerations are really algebraic data types, also known as tagged unions. The values of an enumeration are not aliases for integers, but values in their own right. You can give an enumeration case a numeric raw value, but you can equally give it a string, or no raw value at all.
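For example, a string-backed enumeration (Swift 1.x-era syntax; the type is made up for the example):

```swift
// The raw value here is a String, not an Int.
enum Suit : String {
    case Hearts = "hearts"
    case Spades = "spades"
}

let s = Suit.Hearts
```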

For even more power you can associate values with enumeration cases. For example:

enum SocketState {
    case Reading(String)
    case Writing(String)
    case Waiting(Int)
}

switch s {
case .Reading(let buffer):
    return .Reading(buffer + readMore())
case .Writing(let buffer):
    return .Writing(write(buffer))
case .Waiting(let timeout):
    return .Waiting(timeout - 1)
}

You can attach any sort of type to an enumeration value. This example also shows off enumeration decomposition to extract values out of the enum - you can also do this with tuples.

Switch statement

While C# lets you switch on constant strings as well as enumerations, Swift lets you match on pretty much anything, acting like an if...else if chain, including decomposition of tuples.

func getHttp(url : String) -> (status : Int, statusMessage : String, body : String) { ... }

switch getHttp("...") {
case (404, let message, _):
    ...
case (403, _, _):
    ...
case (200, _, let body):
    ...
default:
    ...
}

While there are other ways of accomplishing this, pattern matching like this makes state machine dispatch much easier to manage.

Also, instead of C#’s infuriating compulsory break; on each case and complete inability to fall through after doing some work, Swift takes a more pragmatic approach and breaks automatically unless you use the fallthrough keyword.
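A sketch of explicit fallthrough (the handler functions are hypothetical):

```swift
switch statusCode {
case 404:
    logMissing()
    fallthrough        // explicitly continue into the next case
case 403:
    sendErrorPage()    // runs for both 404 and 403
default:
    sendContent()      // no break needed; cases don't fall through by default
}
```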


LINQ

One thing I love about .NET is LINQ - not just the nice operations on lists, which can easily be implemented in Swift, but also the ability to construct expression trees and interpret them at runtime. This allows a fantastic degree of expressiveness, especially when interacting with external systems.

Alas, Swift doesn’t have anything like this so far, but with algebraic data types as first class members it’s easier to implement a DSL with a reasonable syntax. Hopefully this will improve in the future.

Personally I’d like a full metaprogramming model - macros rather than runtime interpretation.


Extensions

In C# we have extension methods - methods that behave like they’re part of a class, but really are static methods that take an instance as the first parameter.

Swift takes this one step further: extensions can add protocols (similar to a C# interface) to an existing type, even without access to its implementation. You can also add computed properties and arbitrary functions. This makes it easy to add other types to your own protocols, and to hide internal mucking around from outside influence.
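A small sketch of retrofitting a protocol onto a built-in type (the protocol name is made up for the example):

```swift
protocol Describable {
    func describe() -> String
}

// Add the protocol to Int without touching Int's implementation.
extension Int : Describable {
    func describe() -> String {
        return "the number \(self)"
    }
}

5.describe()   // "the number 5"
```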


Access control

Most languages have a concept of visibility for items - public, protected and private at least. So far Swift doesn’t have these, but it’s early days and I expect them to be implemented in the future.


Reddit emoticon text expander Safari extension

Since I’ve been spending more time on reddit I’ve been somewhat annoyed at how some subreddits hide conversations inside emoticon text, mostly because it takes so long to read with all the hovering.

Since Google didn’t bring up any leads, and the Reddit Enhancement Suite didn’t help, I created my own! Though when people find this they’ll probably tell me all about the much better solutions people have already made.

The code was inspired by, but not based on, the Super Reddit Alt-Text Display userscript.

Go from this


to this!


So here is the aptly named titletext.safariextz