Web Applications in the age of Azure

Increasingly, I’ve found myself turning to Microsoft’s Azure platform for new web projects. The convenience of on-demand infrastructure, combined with a usage-based payment model, is compelling; but what makes Azure shine is the rapidly expanding set of supporting services available.

When you first start writing Azure WebSites, you quickly run into complexities that are often unfamiliar to developers who have only ever targeted on-prem IIS servers. You can’t rely on the local file system, sites span multiple server instances, you have to ship dependencies along with your application’s code, and so on.

These issues aren’t new for those who develop for web-farm environments, but even the smallest Azure WebSite forces you to consider complexities of scale: issues that plagued only the largest of projects in the past.

Fortunately, Azure provides a complete set of services to address those complexities, and using them is fairly easy. Caching services handle shared session state between instances of your application. Azure storage substitutes for the local file system. Azure WebJobs handle background processing and scheduled tasks.

There are also services analogous to most of the common external dependencies: Azure Search, Azure DocumentDB, Azure SQL, Azure AD, etc. Some of these are so similar to their on-prem equivalents that using them is largely transparent, but most act more like ultra-modern replacements for their older, on-prem cousins.

If you need a service Azure doesn’t offer directly, you can always spin up an Azure VM. The Azure Gallery has pre-built VMs for tons of common services, or you can roll out a standard Linux or Windows VM and install whatever third party software you need.

It took me a little time to learn my way around the Azure platform. At first, I resented the additional effort, especially in small projects. After the rather modest initial learning curve though, I have found that Azure provides far more than simple replacements for the same old services I’ve always used. It offers a lot of additional value that I’ve never had easy access to before.

Here is an example. The first time one of my Azure applications needed to handle user file uploads, I found it cumbersome to use Azure Storage. I had to provision a storage account, pull down the NuGet packages, then configure my application to talk to it. It only took about 20 minutes the first time, but it seemed like a hassle compared to just calling into System.IO.

After getting it all hooked up though, I discovered that Azure Storage wasn’t difficult to use, and it eliminated a lot of common problems. I no longer had to worry about mapping server paths, dealing with file and folder permissions, or file contention issues. It just works, and it scales without thought.
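For comparison, here is roughly what a file save looks like against blob storage with the WindowsAzure.Storage client library of that era. The container name, method shape, and connection-string handling are my own illustration, not TicketDesk’s actual code:

```csharp
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class UploadStore
{
    // Hypothetical helper; "uploads" is an illustrative container name.
    public static void SaveUpload(string connectionString, string fileName, Stream content)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var client = account.CreateCloudBlobClient();

        // Creates the container on first use; safe to call repeatedly.
        var container = client.GetContainerReference("uploads");
        container.CreateIfNotExists();

        // No server paths, folder permissions, or file locks to manage.
        var blob = container.GetBlockBlobReference(fileName);
        blob.UploadFromStream(content);
    }
}
```

Compared to System.IO, the only real ceremony is the account/client/container plumbing at the top; the write itself is a one-liner.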

Still later, I came across the need to write information to a queue so a background process could act on it later. Normally I’d have to create a table in my database, or drop custom files on the file system to share information with the background process; or worse, deal with Microsoft Message Queues (I still have nightmares). After setting up Azure Storage for the file uploads though, I also had Azure Storage Queues at my fingertips.
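Queues come from the same storage account as the blobs. A minimal sketch, with a hypothetical queue name and helper shape:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class WorkQueue
{
    // Illustrative only; "background-work" is a hypothetical queue name.
    public static void Enqueue(string connectionString, string message)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var client = account.CreateCloudQueueClient();

        var queue = client.GetQueueReference("background-work");
        queue.CreateIfNotExists();

        // The background process later drains the queue with
        // GetMessage/DeleteMessage calls.
        queue.AddMessage(new CloudQueueMessage(message));
    }
}
```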

I had similar results when I had to set up a cache service for handling session state. It was a tad inconvenient, but when I needed to cache data I’d fetched from the SQL database, I already had a super-easy, super-fast, cloud-scale caching service right there waiting for me.

Sure, I can set up local servers and services that can do any of these things, but in Azure I don’t have to. These services are already there; all I have to do is turn them on.

It has gotten to the point where I really miss Azure whenever I’m working with on-prem applications. I wish IIS could run WebJobs, and that my local ADFS server supported OpenID Connect. I want a search server on-prem that doesn’t require voodoo-devil-magic to setup and maintain. Working outside of Azure has become an inconvenience.

TicketDesk 2.5 – Progress and Developer Notes

I wanted to give everyone an update on the progress of TicketDesk 2.5, and take a few minutes to explain the general architecture that’s starting to take shape over on the develop branch at CodePlex.

Projects and Architecture:

The Database supports any version of SQL Server 2008 or higher, including Azure SQL (but not Compact Edition). The design of TicketDesk would also fit a document database very well, so TD may eventually end up having a RavenDB and/or Azure DocumentDB variant.

The Schema is managed by multi-tenant, code-first migrations, with the execution of migrations handled by startup initializers and/or on-screen admin tools. The identity and business domains have separate contexts and models, each of which is a tenant within the same DB. Similarly, search and email will be separate tenants too.

TicketDesk.Domain is the core business layer. It has no dependencies on web-specific frameworks, Azure frameworks, or specific security providers. It does have a dependency on Entity Framework, and leverages EF as a full-scale business service platform, not just as a data access technology.

I call this a “pervasive EF domain model”. It’s a lot like most domain models you see described in any .NET book or tutorial, but without unnecessary abstractions to hide the Entity Framework components.

The Entities directly contain their own business logic, while DbSets act as generic repositories. Custom repository functions are provided mostly through extension methods. Some extensions are defined in the domain assembly, while those with web-specific dependencies are defined within the web application instead, which is why a DI framework or IoC container hasn’t been necessary within the domain project.

The DbContext is treated as the root business service, and it provides the unit-of-work pattern. Truly cross-cutting business logic will be handled directly by the DbContext, or through extension methods and helper services.
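The extension-method approach can be sketched like this. The entity, status enum, and query names below are hypothetical illustrations; TicketDesk’s real domain types are richer:

```csharp
using System.Linq;

// Hypothetical entity and status enum, for illustration only.
public enum TicketStatus { Open, Closed }

public class Ticket
{
    public int Id { get; set; }
    public TicketStatus Status { get; set; }
    public string AssignedTo { get; set; }
}

// Custom "repository" functions as extension methods over IQueryable<T>,
// so they compose with a DbSet<Ticket> without a separate repository class.
public static class TicketQueryExtensions
{
    public static IQueryable<Ticket> OpenTickets(this IQueryable<Ticket> tickets)
    {
        return tickets.Where(t => t.Status == TicketStatus.Open);
    }

    public static IQueryable<Ticket> AssignedTo(this IQueryable<Ticket> tickets, string user)
    {
        return tickets.Where(t => t.AssignedTo == user);
    }
}
```

In the web app, a call like `context.Tickets.OpenTickets().AssignedTo(userName)` composes into a single SQL query, with no repository interfaces or container registrations needed in the domain project.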

The application doesn’t need a formal DDD style design, but with search and email breaking out into separate services, the design is headed in that general direction. By the time TD 3 is complete, there will likely be a formal service bus mechanism handling communications between separate root business contexts, and this will drive a DDD style eventual consistency pattern.

The first parts of that design will appear in 2.5 with the queue mechanisms that support email and search. It likely will never adhere to the full DDD battery of principles and patterns, but it will borrow heavily from those designs where they make sense.

TicketDesk.Domain.Legacy exists only for conversion of TD 2.1 databases; it’s just an isolated container for EF migrations and migration helpers. It has a completely custom initial migration that upgrades a TD 2.1x database to the starting TD 2.5 schema. After that, the regular TD 2.5 migrations from the main domain assembly will bring the database up to the final schema version.

TicketDesk.Search encapsulates all search-related functionality. Currently it has local Lucene and Azure Search providers. This could be decomposed into separate assemblies, but I haven’t yet seen the need to go that far with it.

Currently, it also fakes a “queue” based mechanism, but before TD 2.5 ships this will be replaced by a more formal queue management system.

TicketDesk.Identity encapsulates all non-web/OWIN-specific security functionality. This is an odd one architecturally, since much of the overall identity system does need dependencies on OWIN middleware components. Right now, the core EF and ASP.NET Identity stuff is isolated here, but the web app layers on additional user and role managers.

Much of this is boilerplate identity code borrowed from stock samples, so as I refactor it for TD’s specific needs, I’m thinking seriously about moving all of the identity stuff to this assembly and letting it have OWIN dependencies. I’m waiting until I tackle ADAL and/or OpenID Connect (for AD federated security) before I decide for sure, but I don’t want to go as far as to write a custom abstraction layer for this thing.

Either way, the identity stuff will remain decoupled from the business domain assembly entirely.

TicketDesk.Web.Client is the main web application. Nothing special about this design, except that it is an ultra-modern use of the latest MVC stack. It uses OWIN/Katana, ASP.NET Identity, and all the new toys like that.

It makes light use of the Dependency Resolver, with Simple Injector as the underlying DI service. The UI is written to Bootstrap 3, with some light jQuery and custom JavaScript here and there. It does use the jQuery Unobtrusive Ajax helpers for partial view rendering; a concession that makes it easier to port the old UI without having to completely re-invent the entire design.

Mail / WebJob: I haven’t implemented these yet, but email will be split out like search. It will use a DB queue table when running on-prem, and Azure Queue Storage when running on Azure. On Azure, mail delivery will be handled by Azure WebJobs.

What I’m working through now, or in the very near future:

Building out the rest of the core UI:

I’m working my way through porting the UI to MVC 5, Razor, and bootstrap 3 now. TicketCenter is close to done, and I’m currently working on the New Tickets screen. Afterwards I’ll get into the main ticket viewer/editor.

The goal is to reproduce most of the current behavior exactly as it exists in TD 2.1. Later, I’ll revisit the behavior to further streamline it. The viewer/editor is a command/action design, rather than a view/edit/submit design, so it lends itself well to an MVC-style design pattern anyway. There are a few activities that do need some smoothing out, but most of that will wait until TD3.

Main Text Editor:

TicketDesk has alternately used an HTML WYSIWYG editor, or a Markdown editor in past versions. I’ve always intended to support both at the same time, but for some reason never seem to get around to it.

TD 2.5 will offer both options though. I have the PageDown (Markdown) editor working in the new ticket editor already, but I also plan to implement a limited HTML editor; probably Summernote. The back-end has always supported both kinds of content, so all I have to do there is port the current code over from TD 2.1.

There will be admin settings to choose which editors are enabled, and which is the default. If multiple editors are enabled, each user will be able to set their own preference.

Ticket Center Lists Editor:

TD 2.1’s ticket center is designed around the idea of user-tailored lists (my tickets, history, open tickets, etc.). In TD 1, you could customize these lists, and I intended to make that a feature of TD 2 as well. But a customization screen appropriate for non-technical users is a big challenge, and TD 2 had a significantly more complex way of managing settings for these lists, so I never had time to make that happen the way I wanted.

For TD 2.5, we will at least have tools to let admins change the default list definitions, maybe by just letting them edit the raw JSON if nothing else. A full end-user custom list designer would be nice, but it will probably have to wait on TD3.

Images & Attachments:

TicketDesk has traditionally supported only file attachments, not embedded images via the text editor. It stores those files directly in the database. This makes it very easy to move the database from one server to another, and simplifies backups as well. While performance hasn’t been a problem, this isn’t the best approach in cloud or hosted deployments; if nothing else, the cost of renting SQL storage space is very high compared to file or blob storage.

TD 2.5 will need a pluggable storage provider that can store attachments on the local file system (or a file share), or in Azure Storage blobs. The legacy database migration feature will need to move existing file data to the storage provider before it removes the old attachments table.
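The provider abstraction could look something like the sketch below. The interface name and signatures are hypothetical illustrations, not TicketDesk’s final API:

```csharp
using System.IO;
using System.Threading.Tasks;

// Hypothetical sketch of a pluggable attachment store. A file-system
// implementation writes under a configured root (or share), while an
// Azure implementation maps the same calls onto blob operations.
public interface IAttachmentStorageProvider
{
    Task SaveAsync(int ticketId, string fileName, Stream content);
    Task<Stream> OpenReadAsync(int ticketId, string fileName);
    Task DeleteAsync(int ticketId, string fileName);
}
```

The legacy migration would then push each row of the old attachments table through `SaveAsync` before dropping the table.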

Eventually I hope to have support for popular file storage solutions like Dropbox and friends.

The front-end text editors will ideally be able to handle inline image uploads, though I don’t think a full media gallery is really necessary; image re-use almost never happens with help desk type systems anyway. If an uploaded image is bigger than a certain size, it should be swapped for a thumbnail in the editor text, and the full image included as an attachment instead.

Eliminate MVC Areas:

I don’t like what areas do to MVC’s routes and navigation, especially with the newer attribute routing stuff. It was cool back in the days of MVC 2 and portable areas, but has outlived its usefulness in MVC 5.

Right now, I do have a couple of areas set up in the code, but I plan to remove these very soon, then clean up the action and URL links so they don’t have to specify an area as custom route data each time.


First-Run Setup:

TD 2.5 will have a “first-run” setup experience. There is a sort of stub for this already, but it will be replaced by an expanded wizard that walks the admin through setting up everything. Most important is setting up the database, security, email, and search features.

Part of this experience will include directing the admin on how to make a few changes in web.config that can’t be automated easily, like adding custom machine keys and such.

Localization (pre-support):

I already have this worked out in TD3. The UI text will be supplied from resource files, instead of being hard-coded. I haven’t quite gotten into the string-heavy part of the system yet, but as soon as I do, I’ll start moving all of the text to resource files. TD 2.5 itself will not provide a full internationalization experience out of the box, but I want to go ahead and get the code in shape for the features coming in TD3. This will also make it super easy for developers to localize the system themselves.

Thanks to everyone for their support and patience!

Dev Diary – Scaling search for an Azure WebSite

Adding real search capabilities to a custom web application is never easy. Search is a complex and deeply specialized area of development, and the tools available to us regular developers are monstrously complex.

With TicketDesk 2, I used the popular Lucene.Net library to provide search. Ported from Apache Lucene (Java), this is the core technology that powers almost every popular search service, appliance, and search library on the market.

Once you’ve tackled the initial learning curve, Lucene.Net isn’t all that difficult to leverage in a simple system like TicketDesk. It is freakishly fast, super flexible, and a powerful search solution; not quite Google good, but close enough for most applications.

The problem with Lucene is that the design revolves around indexes stored on a traditional file system. There are 3rd party extensions that let you store the indexes in a database or in the cloud, but internally these all mimic the behaviors of a file system; that’s just how Lucene works.

You can have many components querying an index at the same time, but only one can write to an index at a time. Normally this single-writer limitation isn’t a huge problem. You code your application so it creates just one writer instance, then share that instance with any components that want to make an index update. As long as you keep things synchronous, it tends to work fine.
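The single shared writer is usually just a lazily created static, along these lines. This is a sketch against Lucene.Net 3.x; the class and method names are my own, and TicketDesk’s real indexing code is more involved:

```csharp
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Version = Lucene.Net.Util.Version;

// One process-wide IndexWriter, shared by anything that updates the index.
public static class SearchWriter
{
    private static readonly object SyncRoot = new object();
    private static IndexWriter _writer;

    public static IndexWriter GetWriter(Directory indexDirectory)
    {
        lock (SyncRoot)
        {
            if (_writer == null)
            {
                // Constructing the writer takes the index's exclusive write
                // lock, which is held until the writer is disposed.
                _writer = new IndexWriter(indexDirectory,
                    new StandardAnalyzer(Version.LUCENE_30),
                    IndexWriter.MaxFieldLength.UNLIMITED);
            }
            return _writer;
        }
    }
}
```

This pattern works fine in a single process; the trouble described below starts when several independent site instances each try to be that one writer.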

And here lies the problem. TicketDesk 2.5 and 3.0 are designed to run at scale, and will ship ready for deployment to the cloud as an Azure WebSite. In this scenario, there can be several instances of the application running at the same time, each needing to write to a single, shared Lucene index.

I spent a full week trying to find a way around the single-writer problem. WebSites in Azure shouldn’t write to their filesystems. Anything written locally is volatile, and vanishes whenever Azure automatically moves the site to a different host. So, I started with the AzureDirectory library for Lucene, which lets you store the search index in Azure blob storage. This works well, and gives Lucene a stable place to store shared indexes in the cloud.

The second problem was keeping multiple website instances from writing to the index at the same time. Even though the index is in blob storage, Lucene still demands an exclusive write lock. Each website can see when the index is locked by another writer, but there isn’t a way to know if the lock is legitimate, or an orphaned lock left behind when some other instance went down unexpectedly.

The only easy solution is to make sure there is a separate application to handle all index writes, and that there is only a single instance of that application running. You can scale the websites or other clients, just don’t scale the index writer application.

WebJobs were designed specifically for handling background work on behalf of Azure WebSites, so I started there. Each website would queue index updates to an Azure storage queue, then the WebJob could come along and process the queue in the background. But WebJobs scale with the websites, so if you have multiple website instances, you also have multiple WebJobs. Hopefully, in the future MS will give us the ability to scale WebJobs independently of the websites they service.

So the only remaining solution would be an old-fashioned worker role. Worker roles scale independently; or in this case, can be instructed not to scale. This works well, but I just don’t like the solution. Effectively, the worker role ends up being a half-assed, custom search server. It costs a decent amount of money to run a separate worker role instance, plus it complicates the deployment and management of the entire application.

Failing to find a way to continue in Azure with custom Lucene indexes without a centralized search server, I figured I’d just design TicketDesk to take advantage of the existing Azure-native solution: Azure Search. It is (relatively) easy to code against, and there is a free tier that should be suitable for most smaller shops. For larger shops, the cost of a paid Azure Search tier is still reasonable when compared to the cost of a dedicated worker role.

So, out of the box, TicketDesk 2.5 will include at least two search providers: a Lucene provider for on-premise single-instance setups, and native Azure Search for cloud deployments. I will eventually add an alternative for on-premise web farms and non-Azure cloud VMs. In the meantime though, you could still scale in your own data center by using Azure Search remotely, or stick with Lucene and manually disable the search writer on all but one instance of the site in the web farm.

One additional note of interest: Azure Search is still in preview, and it doesn’t have an official client library for .NET yet. There are two 3rd party client libraries though: Reddog.Search and Azure Search Client Library. Both are free NuGet packages, but only Reddog.Search has a public open source repository. Also, Reddog has a management portal you can run locally, or install as an Azure WebSite extension.


Asp.net Developer: Why we didn’t hire you.

Photo by striatic (Creative Commons)

In the last year, I’ve interviewed around 30 candidates. These were for mid to senior level asp.net positions at three different companies. Sadly, I’ve only recommended hiring two. Each case is unique, but here are some common reasons for a thumbs-down vote.

You don’t know JavaScript:

Depressingly, none of the candidates had a strong background in JavaScript.

You want a web development job, but can’t code in the only language web browsers can run? Tell me more!

Ten years ago we mostly avoided JavaScript, and just did everything on the server. You could scrape by without knowing much JavaScript. Five years ago, I would have been worried about your lack of commitment amid the rising popularity of *.js frameworks, but if you were otherwise qualified I would still risk it; hoping you could learn enough JavaScript on the job to keep up.

Today, JavaScript is everywhere. It’s in the browser. It’s on the server. Some of your developer tools and utilities use JavaScript. JavaScript is the native language in many enterprise-grade database systems. JavaScript has become a basic requirement for just about any kind of development, and it isn’t an easy language to work with. Mastering the techniques, patterns, and frameworks needed for production quality code takes a lot of time and practice.

So, if you walk into my interview without a solid grasp of JavaScript, you aren’t going to get the job. You simply aren’t qualified to code for the web.

Never used distributed source control:

I get that your skills with DVCS are thin. Most business employers aren’t using a DVCS yet. Some don’t use formal source control at all (which is terrifying). Most candidates had a little experience with SVN or TFS, but a full third had no source control experience at all. Only a handful had used Mercurial or Git, and several hadn’t even heard of either.

I can teach the basics of DVCS to any competent user, even a non-developer, in just a few hours. I’m not worried about the lack of the skill itself; a PluralSight video and a copy of SourceTree and you’ll be good to go. But the fact that you don’t already have significant experience with at least one popular DVCS is a major red flag.

The entire asp.net stack, most of the popular 3rd party libraries, and even most of the developer tools are open source projects, hosted on the web in public Git or Mercurial repositories. Most build systems, package managers, and continuous integration systems rely on DVCS technologies. The fact that you aren’t comfortable with at least one such system tells me that you aren’t keeping up with your profession, and aren’t participating in the general developer community.

You don’t know anything about design patterns:

When you list Asp.net MVC on your resume, I absolutely will ask about design patterns. My expectations are low, but I will still be disappointed if you have no clue what I’m talking about.

I’m fine that you don’t know Martin Fowler’s name, or about the Gang of Four. If I ask you to name an example of an abstract singleton factory in asp.net, I won’t be surprised when you look at me like I just broke out in interpretive dance. But I do expect you to know what the term “design pattern” means. You should know that “MVC” is the name of a design pattern; even better if you can explain a little about the pattern itself.

I’m not an academic design pattern guru myself, but some patterns are so common that it is difficult to discuss code with someone who doesn’t know the basic terminology: singleton, factory, repository, observer, etc.

Asp.net development in particular revolves around a very specific set of design patterns: IoC, MVC, Repository, and Unit of Work being the most relevant. If you aren’t at least vaguely aware of these patterns, then you can’t possibly be proficient with the asp.net MVC framework.

You don’t have to be able to debate the merits of domain driven design vs. onion architecture. You don’t have to be able to tell me the distinction between transaction script and unit of work. But if I ask you if a C# static class is a singleton, you should at least understand the words coming out of my mouth… even if you can’t give me a good answer to the question itself.

Day laborer:

You only know technologies your former employers used, but nothing else. Your last company still used ADO.NET Datasets, but you haven’t even bothered to read up on LINQ to SQL, Entity Framework, or NHibernate?

Congratulations, you did the bare minimum necessary to earn a paycheck!

I want candidates that take responsibility for developing their own professional skills beyond just the minimum. It’s great that you can meet today’s challenges, but I need people who will be ready for tomorrow’s projects too.

Needs training wheels:

You’ve only coded modules for existing applications, and maybe a few stand-alone tools or utilities for the server room. What you’ve done sounds impressive, but it doesn’t seem like you’ve ever built an application from the ground up. I’m not expecting that you’ve architected your own custom enterprise, multi-tenant ERP solution. But nowhere in the interview did you give me the impression that you’d ever even clicked “file –> new project” either.

So you can follow someone else’s patterns and conventions, and you can plug code into someone else’s framework. But I can’t trust you to code for problems beyond those that some pre-built framework anticipated.

If I give you a blank Visual Studio solution and a list of requirements, can you deliver a complete and high-quality product? Will it be coded to standards? Will the code be organized and maintainable?

Closed shop:

You’ve never built software for users outside your employer’s firewall, much less for the general public. The quality, reliability, and usability needs of software for non-technical end-users are different. It requires a more disciplined approach, and involves skills that aren’t heavily used in purely internal projects.

If you can code for the public end-user, I am confident that you can easily code for my company’s internal users. The reverse is not true.

Line-of-business developers are particularly prone to this problem, since most LOB apps are internal-only. Internal users are more tolerant of errors, poor design can be offset by specialized user training, and reliability is bolstered by your control of both the server and client environments.

I need to know that you can code to higher standards when it is necessary.

Doesn’t know why:

You have a firm grasp of the tools and technologies your employers have used in the past, but when I ask, “why did your company choose to use X instead of Y?” I get nothing. You told me about that amazing widget you wrote — I liked that story — but when I ask you why you didn’t use a 3rd party widget instead, you can’t give me a valid business justification for the extra time and effort you spent on a custom solution.

You don’t have to be an expert in cost analysis or anything, but I need to know that you can make sound decisions about the technologies, platforms, and coding techniques that you’ll use to solve the challenges my company is facing. Choosing a technology just because it’s cool or popular isn’t always the best bet for business applications.

Entity Framework – Storing a complex entity as JSON in a single DB column

During the development of TicketDesk 2.5, I came across an unusual case. I wanted to store a large chunk of my entity model as a simple JSON string, and put it into a single column in the database.

Here’s the setup:

I have an entity that encapsulates all of a user’s display preferences and similar settings. One of those settings is a complex set of objects that represents the user’s custom settings for list view UI pages. There can be many lists, each with separate settings. Some of the settings for a list contain collections of other objects, resulting in a hierarchy of settings that goes three levels deep.

I didn’t want to represent these settings as a relational data model in the database though. Using EF’s standard persistence mapping conventions, this collection of settings ends up being spread across six tables. The T-SQL queries to access that data would be rather slow and cumbersome, and the relational model doesn’t add any value at all.

Instead, I just wanted to serialize out the entire collection of settings as a single JSON string, and store it in one column in the user settings table. At the same time though, I wanted the code to behave as if this were just a natural part of my regular EF entity model.

The solution:

The solution was to use a complex type, with some fluent model builder magic, to flatten the hierarchy into a single column. The hierarchy itself is represented as a custom collection, with a bit of manual JSON serialization/deserialization built in.

I got a pointer in the right general direction from this SO post, which saved me a bunch of time when approaching this more advanced scenario.

First, let’s take a look at the root entity here:

public class UserSetting
{
    public string UserId { get; set; }

    public virtual UserTicketListSettingsCollection ListSettings { get; set; }
}

This is the only entity which will map to its own table in the DB. The ListSettings collection is the property I want persisted as JSON in a single column.

Here is the custom collection that will be stored:

public class UserTicketListSettingsCollection : Collection<UserTicketListSetting>
{
    public void Add(ICollection<UserTicketListSetting> settings)
    {
        foreach (var listSetting in settings) { Add(listSetting); }
    }

    public string Serialized
    {
        get { return Newtonsoft.Json.JsonConvert.SerializeObject(this); }
        set
        {
            if (string.IsNullOrEmpty(value)) { return; }

            var jData = Newtonsoft.Json.JsonConvert.DeserializeObject<List<UserTicketListSetting>>(value);
            Items.Clear();
            Add(jData);
        }
    }
}

This is a collection type that inherits from the generic Collection<T>. In this case, T is the UserTicketListSetting type — which is a standard POCO wrapping up all of the settings for all of the various list views in one place.

Some of the properties inside UserTicketListSetting contain collections of other POCOs. The specific details of what’s going on inside those classes don’t matter to this discussion; just understand that it results in a hierarchy of related objects. None of the properties in that hierarchy are marked up with EF attributes or anything.

The only magic here is that we have a Serialized property, which manually handles converting from/to JSON. This is the only property that we want persisted to the database.

To make that persistence happen, we will make UserTicketListSettingsCollection an EF complex type, though not by using the [ComplexType] attribute. Instead, we’ll manually register this complex type via the fluent model builder API.

In the DB Context this looks like this:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.ComplexType<UserTicketListSettingsCollection>()
        .Property(p => p.Serialized);
}

This just tells EF that UserTicketListSettingsCollection is a complex type, and the only property we care about is the Serialized property. If there were other properties in UserTicketListSettingsCollection, you would need to exclude them with something like:

modelBuilder.ComplexType<UserTicketListSettingsCollection>().Ignore(p => p.PropertyToIgnore);

And that’s all I needed to get EF to store this entire hierarchy as a single JSON column.

Using this model in code is just like using any other EF entity. I can query it with LINQ expressions, and SaveChanges on the DbContext updates the JSON data in the DB just like any other entity. Even the generation of code-based migrations works as expected.
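In practice, usage looks like any other tracked entity. The context name, key value, and setting property below are hypothetical, for illustration only:

```csharp
using System.Linq;

public class UserSettingsService
{
    // Hypothetical context name (TdDomainContext) and setting property
    // (ItemsPerPage); illustrative only.
    public void UpdateListSettings(string userId)
    {
        using (var context = new TdDomainContext())
        {
            var settings = context.UserSettings.Find(userId);

            // Work with the collection like ordinary objects...
            var list = settings.ListSettings.First();
            list.ItemsPerPage = 25;

            // ...and on SaveChanges, EF reads the Serialized property,
            // sees the JSON has changed, and updates the single column.
            context.SaveChanges();
        }
    }
}
```

Note that change detection works because the Serialized getter re-serializes the current items; EF compares that string to the originally loaded value during DetectChanges.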

It took a LOT of experimentation and digging to figure out how to make this work, but the implementation is rather simple once you know how to approach the problem.

This also reflects the amazing power and flexibility of Entity Framework. EF can be extended to fit very advanced scenarios even when the designers didn’t anticipate them directly.

You can see the full implementation of this in TicketDesk 2.5. Currently, TD 2.5 is in alpha, so look to the develop branch in source control on CodePlex. You will find this example, as well as several variations in the TicketDesk.Domain assembly.

TicketDesk 2.5 – Coming soon!

While I’ve been working on TicketDesk 3, the code for TicketDesk 2 hasn’t been getting any younger. Since TD3 is still a ways out from a production release, I’ve decided to release a major overhaul of TicketDesk 2 in the meantime. The new TicketDesk 2.5 release will bring the technology stack of TD2 up to the latest version of Microsoft’s Web Platform. Several of the changes will be derived from code developed for TD3.

I am targeting the end of October for a beta version, with a final release in mid to late November (subject to change, as always).

Here are the major changes I have planned so far:

Remove AD Security:

This release will not support direct integration with Active Directory. This was a popular feature in TicketDesk 1 and 2, but it has also been a major problem area. Instead, TD 2.5 will only support AD indirectly, via federation (ADFS, for example) or through integration with an external identity server (like Azure AD).

Modernized Local Security:

TicketDesk 2.1x still uses the ancient SqlMembership providers that shipped with .Net 2.0 back in 2005. Authorization and identity have come a very long way since then, so TD 2.5 will be upgraded to the newest version of the ASP.NET Identity framework. It will also provide on-screen management tools to help administrators migrate existing user accounts from their legacy TD 2.1x databases.

UI Changes:

TD 2.1x was built on Asp.net MVC 2, with the original Asp.net (Web Forms) view engine. That engine isn’t well supported by recent versions of Visual Studio, and most developers have long since abandoned it in favor of Razor. I don’t plan many major changes to the general UI’s behavior or appearance, but by re-implementing on Razor I can bring the project into compatibility with Visual Studio 2013 and current Asp.net coding standards.

There will be some minor changes to the UI. I will be removing some of the crusty old jQuery components, and updating the styles to take advantage of newer CSS features that weren’t widely supported when TD2 was first built.

Entity Framework Code-First:

TicketDesk 2.5 will move from Entity Framework 4 database-first models to Entity Framework 6 with a Code-First model. EF Migrations will provide ongoing schema management from here on out, including the ability to migrate legacy TicketDesk 2.1x databases. Along with this change, TD 2.5 will also include on-screen database management utilities for migrating legacy databases, seeding demo/test data, and so on.

This refactoring will also bring TD 2.5 in line with the technologies backing TD 3, which will greatly simplify future upgrades.

Eliminate MEF:

The Managed Extensibility Framework has continued to evolve, but it still isn’t a very good choice for dependency injection concerns in a web application. Instead, TD 2.5 will use Simple Injector for IoC. Some of the simplifications in the back-end should also reduce the reliance on dependency injection techniques quite a lot.

Improved Email Notifications:

Several improvements to Email Notifications are planned. Most of these are intended to give administrators greater control over how and when TicketDesk sends email notifications. This will include better on-screen testing and diagnostic tools for troubleshooting email related issues.

Multiple Projects:

TicketDesk 2.5 will support multiple projects, which will let you handle tickets for different operations, projects, or products in isolation. You will be able to move tickets from one project to another, and can turn off multiple-project support entirely if you don’t need the functionality. I do not know yet if I’ll support user permissions on a per-project basis in this version, but TD3 will certainly provide that functionality.

Watch/Follow Tickets:

Users will be able to watch or follow tickets without having to be the ticket’s owner or assigned user. This will allow these users to receive notifications of changes as the ticket progresses.

Azure Deployments:

TicketDesk 2.5 will be deployable as an Azure WebSite. Currently, there are several issues that make deploying to cloud or farm environments tricky. The biggest are that the Lucene search indexes are stored on the file system, and the old SqlMembership providers are not compatible with the features provided by Azure SQL. These issues are not insurmountable for experienced .Net developers, but deployment to web farms or cloud providers is not currently an out-of-box capability of the system.

To make TicketDesk play well with cloud environments, a pluggable storage provider will be used for any features needing access to non-database storage. When deployed to single instance environments, TD 2.5 will use the file system, but you will be able to reconfigure it for Azure Blob storage when deploying to the cloud. Attachment storage will be moved out of the SQL database to the new storage provider as well.
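To sketch the idea (purely illustrative; these are not TicketDesk’s actual type names), such a pluggable provider boils down to a small storage interface with interchangeable implementations:

```csharp
using System.IO;
using System.Threading.Tasks;

// Hypothetical abstraction -- the real TicketDesk interface may differ.
public interface IAttachmentStorageProvider
{
    // Persist a file's contents under a logical container name.
    Task SaveAsync(string container, string fileName, Stream content);

    // Retrieve a previously stored file as a readable stream.
    Task<Stream> OpenReadAsync(string container, string fileName);

    // Remove a stored file.
    Task DeleteAsync(string container, string fileName);
}
```

A single-instance deployment would register a file system implementation at startup, while an Azure deployment would swap in an Azure Blob storage implementation via configuration, with no changes to the calling code.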

The only hold-up for Azure SQL is the membership system, but the newer ASP.NET Identity framework fully supports EF migrations and is compatible with any EF data provider –including Azure SQL.

Early pre-alpha code is already committed to the CodePlex repository, and will be updated regularly as work continues. Right now, there isn’t much to see. I’m still working on the basic plumbing for database and identity management, so there are no TD-specific user-facing features yet. As soon as the back-end is shored up, I’ll start porting in TD-specific UI features.

A demo version of the site will be available soon, hosted on Azure. I just have to work out a few minor details related to resetting the database and providing sample seed data, then I can open the demo to the public.

TicketDesk 3 Dev Diary – Angular it shall be!

It has been a while since I updated everyone on TicketDesk 3. I took a break to wrap up several other projects and to search for a new day job, but now I’m back to working on TicketDesk 3 again.

I haven’t been idle though. I spent much of this spring working with Asp.net Identity, OWIN authentication middleware, and all the federated identity goodness — Azure AD, Azure Graph API, ADFS, Azure AD Sync, WsFederation, OAuth, OpenID Connect, etc.

I now have a decent grasp of what’s going on in the world of modern authentication and identity. There is a LOT happening in this space right now, and the Asp.net stack is right there on the bleeding edge of it all.

Unfortunately, documentation and guidance on how all the new security pieces fit together in real world apps is sparse, but I’ve learned enough now to be comfortable that I can get TD3 to handle multiple authentication scenarios.

Over the last several weeks, I’ve also been working deeply with Angular.js. I’ve dabbled in SPAs in the past, mostly on the Durandal, Knockout.js, and Breeze stack –which the current TD3 alpha uses. This space has also been moving fast, and Angular in particular has been gaining traction like crazy.

For me, the impetus for examining Angular in more depth was the announcement that Durandal’s principal author had joined the Angular team, and that Durandal will merge with the next version of Angular (2.0). I really liked Durandal, but Angular is what all the cool kids are using –it just makes more sense to go with the tide than to stick with a soon-to-be-obsolete framework.

After working with Angular.js a bit, I decided to go ahead and move TD3 over now, before the UI gets any larger or more complex.

So far, I’m enjoying Angular. It is a more complex platform than Durandal, so getting my head around it required significant re-training. But I’m finding that it is an amazingly productive platform.

As before, I’m basing the early TD3 platform on the courses from John Papa, and his Hot Towel Angular packages. Hot Towel made a good starting point for Durandal, and it makes an even better one for Angular. This time though, I’m not sticking as close to the Hot Towel-provided UI bits. Instead, I’m working with a much more advanced theme from WrapBootstrap.

Here’s the short-term plan:

  • Start with the Hot Towel Angular packages.
  • Mix in a gutted version of the new theme –leaving just the parts I intend to use.
  • Hook up Asp.Net text resources to i18next and Angular’s internationalization filters.
  • Integrate client bearer token security with Asp.net Identity and OWIN middleware.
  • Set up server-side Breeze and the Breeze.js library to move the data around.
  • Put the SignalR stuff back in.
  • Tackle production JS/CSS minification.
  • Clean it up, and package it as a starter kit.
  • Build out the rest of TD3 on the new platform.

I have a couple of upcoming projects with platform needs, so this baseline starter kit will serve as a common ancestor. If it’s good enough, I might even make it a public NuGet package or something.

I’ll commit code to GitHub and CodePlex as soon as I have the platform in a usable state, and can afford to buy the extended license I need to redistribute the theme I’m using.


Reddnet.net – Leaving Google Apps and DreamHost for Azure and Office 365

I’ve owned reddnet.net for a long time. Back in the ’90s I hosted in my own basement data-center™.com, but eventually the costs became problematic. So, I switched to 3rd party hosting providers. Since then, I’ve bounced from provider to provider, never being satisfied with any of them.

My needs are simple. I have a custom domain, a little personal blog, and a few email accounts. I don’t want to spend a ton of cash on the services, nor much time on administrative tasks. At the same time though, this is the heart of my personal online identity. It needs to perform reliably.

A few years ago, after yet another of my hosting providers decayed into oblivion, I decided to split my web and email hosting between different providers.

Email is the most painful service to move, so I decided to move it to Google Apps. Google let non-corporate organizations, like me, host at Google Apps for free. They are a stable company, and they handle email exceptionally well. So, I figured using Google might eliminate my biennial email migration hell.

For the web site, I chose DreamHost, one of the “premier” WordPress partners. Sadly, DreamHost just plain sucks. Their server performance is abysmal, and the network latency makes me wonder which African country hosts their data center –and if it’s powered by hamsters, or a dung-burning furnace. On the plus side, it is reasonably cheap. My blog isn’t exactly popular, so I could live with the sub-optimal service for a while.

In the years since that move, I’ve grown increasingly frustrated with Google. They killed off “free” Google Apps hosting. I’m grandfathered into the plan, but as new services roll out or old ones get upgraded, we freeloaders are the last to see an update –if we get updated at all.

Clearly, they want us to buy into a business tier plan. I don’t mind paying for my services, as long as the services are worth it, but Google has given me serious doubts about the value of their services going forward.

Their war against Microsoft has put customers, like me, in the cross-fire. They killed ActiveSync support for Gmail while sabotaging key APIs across their other services. They refuse to write native apps for Windows 8 or Windows Phone 8 at all –which wouldn’t be bad if they didn’t also interfere with 3rd party apps that try to bring Google’s services to Microsoft’s platforms.

As a Microsoft developer, and Windows and Windows Phone user, Google’s services –especially the Google Apps services– are nearly useless outside a web browser.

The value of using modern web based software services is the ability for it to become an integral part of the entire computing experience –across all platforms, devices and applications. Google seems to disagree.

I’m not claiming Microsoft is an innocent victim here. Microsoft’s legal extortion of licensing revenue from Android was a real dick move, for example. But Microsoft doesn’t put its customers on the front-line. Microsoft encourages apps for Apple and Google products, often writing their own native applications when necessary. They certainly never obstruct my ability to use one of their services just because they don’t like the device I chose. They don’t play games with their APIs to sabotage their products on other platforms.

So, it got to the point where I only had two good options: pay for a subscription to Google Apps, or pay for Office 365. My primary concern is making sure I have email services for my domain. Everything else in Google Apps or Office 365 is just a nice-to-have extra.

Aside from my reservations about Google’s commitment to open, cross-platform integration, what tipped the scales firmly towards a move to Office 365 was Microsoft Azure. Azure is the cloud services platform backing Office 365, in the same way that Google App Engine backs Google Apps.

A move to Office 365 implicitly sets up my domain in Azure, which gives me the opportunity to reunify my web and email services under one provider again. Better still, Azure is a platform that I understand and work with professionally on a regular basis.

I could have hosted my website on Google App Engine too, but honestly it isn’t a platform I understand well, and the setup for WordPress there is not painless. On Azure, you just pick WordPress from the web site gallery and it’s done –stupid easy.

Unlike my past hosting providers, Azure’s prices scale very smoothly based on usage. Hosting a simple WordPress site, like mine, costs about $14/month. This is slightly more than a traditional 3rd party WordPress provider, but it performs significantly better too.

And the best part is that, as a subscriber to the Microsoft Developer Network (MSDN), I get $100 a month of credit to spend on Azure resources. This doesn’t count towards Office 365 licenses, but it effectively makes the web hosting free, and leaves plenty of credit for other projects.

On my old setup, Google was free, while DreamHost ran about $100 a year… and I was unhappy with both. After the switch, Azure is free, while Office 365 runs $120/year (because I need two licenses at $60/ea).

Bottom line — for an extra $20 a year, I get access to high-performance personal web hosting on a platform I know and trust, first-class email, and I regain the seamless service integration across my desktop and phone devices.


Entity Framework: It’s not a stack of pancakes!

I’ve been talking to a lot of line-of-business developers lately. Most have adopted newer technologies like Entity Framework, but many are still working with strictly layered application designs. They’ll have POCO entities, DbContexts, DbSets, and all the modern goodness EF brings. Then they smash it all down into a data access layer, hiding EF behind several layers of abstraction. As a result, these applications can’t leverage EF’s best features, and the entire design becomes very cumbersome.

I blame n-tier architectural thinking!


Most senior .Net developers cut their teeth on .net during a time when n-tier was a pervasive discussion across the industry. Every book, article and classroom lecture about web application design included a long discussion on n-tier fundamentals. The n-tier hype faded years ago, replaced by more realistic lego-block approaches, but n-tier inspired conceptualizations still exert a strong influence over modern designs.

I call it pancake thinking — conceptualizing the logical arrangement of an application as a series of horizontal layers, each with distinct, non-overlapping areas of responsibility. Pancake thinking pins Entity Framework as a DAL technology –just another means to get data in and out of a database –a fundamental misunderstanding of EF’s intended role.

Here is a diagram of a full blown n-tier approach applied to an ASP.NET MVC application using Entity Framework:

EF N-Tier Diagram

Note that all of Entity Framework is buried at the bottom. An object mapper is probably transforming EF’s entities into business objects, so by the time information leaves the DAL, EF has been beaten completely out of it. EF acts only as a modernized version of traditional ADO.NET objects: a better experience for DAL developers, but one that adds no value for those writing code further up the stack.

I don’t see as many strictly layered designs like this in the wild, though I have come across a few recently. Most of the current literature around EF advocates a somewhat hybrid approach, like this:

EF n-Tier Alternate Design

In this design, EF’s POCOs roam around the business and presentation layers like free-range chickens, but DbContext and DbSets are still being tortured in the basement.

What I dislike about this design is that EF has been turned on its head. The DbContext should be at the top of the stack, acting as the entry point through which you interact with entities.

So, how should a modern EF driven application be designed?

EF’s POCO entities, along with DbContext and DbSets, work best with a behavior driven approach to application design. You design around the business-behaviors of your application, largely without concern for what the database looks like. Instead of a horizontally segmented architecture, EF becomes a pervasive framework used throughout the entire application at all levels. If you are worried about mixing “data-logic” with “business-logic”, then you are still thinking about pancakes. Stop it!

Here is another diagram, this time letting Entity Framework do its thing, without any unnecessary abstractions:

EF Domain Model

Notice first, how much simpler the design becomes. We’ve elevated EF to a true business framework, and we’ve eliminated tons of abstractions in the process. We still have n-tier going on, so relax! We just didn’t logically split data and business code like you might be used to.

This is a domain-model architecture, and borrows some of the general ideas of “domain driven design” (DDD). If you are a DDD purist, please do not write me hate mail. What I’m proposing here is not intended to be a true DDD design. If you are interested in DDD in depth, check out Vaughn Vernon’s site. For discussions on practical DDD with .Net and EF, Vaughn’s TechEd presentations are a must watch. 

To understand how this design works, let’s explore the three main concepts behind Entity Framework.

EF Entity Model:

The heart of EF is the entity model. At its simplest, the model is just a bunch of data transfer objects (DTOs) –together they form an in-memory model mirroring the database’s structure. If you generate a model from an existing database, you get this shape. This is also what happens if you design a code-first model using the same thinking you’d use to design a relational database. This “in-memory database” viewpoint is how most n-tier applications tend to use EF models.

That simplistic approach doesn’t leverage EF’s power and flexibility. Instead, the model should be a true business domain model, or something close to one. Your entities contain whatever code is appropriate for their business function, and they are organized to best fit the business requirements.

The nice thing about a business-oriented model is that code operating against the entities feels very natural to object-oriented programmers. You don’t concern yourself with how the actual persistence is done; EF takes care of it for you. This is exactly the level of abstraction n-tier designs strive for, but EF gives you the same result without the rigid, horizontal layering.
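As a small illustration (a hypothetical entity, not actual TicketDesk code), a business-oriented entity carries its own behavior instead of acting as a bare data bag:

```csharp
using System;

public class Ticket
{
    public int TicketId { get; set; }
    public string Title { get; set; }
    public string AssignedTo { get; set; }
    public DateTime? ClosedDate { get; set; }

    // Read-only properties and methods are ignored by EF's mapping,
    // so business behavior can live right on the entity.
    public bool IsOpen
    {
        get { return ClosedDate == null; }
    }

    public void Close(string userName)
    {
        if (!IsOpen)
        {
            throw new InvalidOperationException("Ticket is already closed.");
        }
        AssignedTo = userName;
        ClosedDate = DateTime.UtcNow;
    }
}
```

Calling code just invokes ticket.Close(user); persisting the changed properties is EF’s problem, not the caller’s.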

Persistence mapping:

The logical entity design should be based on business behavior, but the actual code implementation does require you to understand how EF handles persistence.

Persistence details do place some constraints on the kinds of OOP acrobatics you can employ in your model, so you need to be aware of how it works. Overall though, the constraints are mild, and shouldn’t keep you from an implementation that remains true to the intent of the business-centric design.

Many properties on your entities will need to be persisted to the database. Others may exist only to support runtime behaviors, but aren’t persisted. To figure out how, or if, properties map to the database, EF uses a combination of conventions and attribute annotations. EF doesn’t care about your entity’s other methods, fields, events, delegates, etc. so you are free to implement whatever business code you need.

EF does a good job of automatically inferring much of a model’s mapping from code conventions alone. If you use the conventions appropriately, you get a head-start on your persistence mappings –no code needed. For properties that need more explicit definitions, you use attributes to tell EF how to interpret them.
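As a quick sketch (the class and property names here are illustrative), conventions handle the common cases while attributes cover the exceptions:

```csharp
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class Project
{
    // Convention: a property named <ClassName>Id (or just Id) is
    // automatically inferred to be the primary key.
    public int ProjectId { get; set; }

    // Attributes: make the mapping explicit where conventions fall short.
    [Required]
    [StringLength(100)]
    public string Name { get; set; }

    // Attribute: keep a runtime-only property out of the database.
    [NotMapped]
    public bool IsSelected { get; set; }
}
```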

For really advanced cases, you can hook into the model builder’s fluent API. This powerful tool lets you define tricky mappings that attributes and conventions alone can’t describe fully. If your model is significantly dissimilar from your database’s structure, you may spend a lot of time getting to know the model builder –but it’s easy to use, and amazingly powerful.

While you will need to understand EF persistence issues, you only need to concern yourself with them when you implement the entity model. For code using that model, these details are highly transparent –as they should be.

Repositories and Business Services:

The final piece of EF is the part so many people insist on hiding –the DbContext and DbSets. If you are thinking in pancakes, the DbContext seems like a hub for accessing the database, and n-tier principles have trained you to hide data access from other code, at all costs!

Typically, n-tier type abstractions take the form of custom repositories layered on top of entity framework’s objects. Only the repositories may instantiate and use a DbContext, while everything at higher layers must go through the repositories.

A service or unit-of-work pattern is usually layered on top of the custom repositories too. The service manages the repositories, while the repositories manage EF’s DbContext and DbSets.

If you’ve ever tried to layer an n-tier application on EF like this, you probably found yourself fighting EF all over the place. This abstraction is the source of your pain.

An EF DbContext is already an implementation of a unit-of-work design pattern. The DbSet is a generic repository pattern. So you’ve just been layering custom unit-of-work and repositories over top of EF’s unit-of-work and repositories. That extra abstraction doesn’t add much value, but it sure adds a lot of complexity.

Ideally, the DbContext should be a root business service. The most important thing to understand is that this belongs at the top of the business layer, not buried under it.

Your entities directly contain the internal business logic appropriate to enforce their behaviors. Similarly, a DbSet is where you put business logic that operates against the set of an entity type. Anything that you’d normally put in custom repositories can be added to the real DbSet instead through extension methods.

Extension methods let you extend a DbSet on the fly. They are fantastic for dealing with business-context-specific concerns, and you can have a different set of extension methods for each of your business contexts. The extension methods can be arranged by namespace, and can also be defined in assemblies higher in the stack. In the latter case, the extension may have access to dependencies within the higher-layer assembly that would not be appropriate to couple directly to your business layer. For example, an extension method in an Asp.net web application can depend on HttpContext, but you would never want to create a dependency like that directly in the business domain. Calling code can just choose which extensions are appropriate, and import/use those namespaces, while ignoring extension methods from other contexts.
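For example (using a hypothetical Ticket entity), logic that would otherwise land in a custom repository can live in extension methods over the real DbSet, and still compose naturally with LINQ:

```csharp
using System.Data.Entity;
using System.Linq;

public static class TicketSetExtensions
{
    // Reads like a repository method, but operates directly on the
    // real DbSet with no extra abstraction layer.
    public static IQueryable<Ticket> OpenTickets(this DbSet<Ticket> tickets)
    {
        return tickets.Where(t => t.ClosedDate == null);
    }

    // Returning IQueryable lets extensions chain together.
    public static IQueryable<Ticket> AssignedTo(
        this IQueryable<Ticket> tickets, string userName)
    {
        return tickets.Where(t => t.AssignedTo == userName);
    }
}

// Usage from calling code that has imported the namespace:
//   context.Tickets.OpenTickets().AssignedTo("alice").ToList();
```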

For cross-cutting concerns that span multiple entity types, you can extend the DbContext itself. A common approach is to have multiple concrete roots, one for each of your business contexts. The business-specific roots inherit from a common DbContext base class that contains the EF-specific stuff. Factory and adapter patterns often appear in relation to these roots as well, but the key concept is that each top-level service derives from a real DbContext, so your calling code has all the LINQ and EF goodness at its disposal.
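A minimal sketch of that shape (type and member names are illustrative, including the assumed Ticket entity; this is not TicketDesk’s actual API) might be:

```csharp
using System;
using System.Data.Entity;
using System.Linq;

// The base class holds the EF plumbing shared by every business root.
public abstract class TicketDeskContextBase : DbContext
{
    protected TicketDeskContextBase(string connectionName) : base(connectionName) { }

    public DbSet<Ticket> Tickets { get; set; }
}

// A concrete root for one business context. Cross-cutting logic that
// spans multiple entity types lives here, while callers keep full
// LINQ/EF access through the inherited DbContext members.
public class TicketCenterContext : TicketDeskContextBase
{
    public TicketCenterContext() : base("TicketDesk") { }

    public int CloseTicketsAssignedTo(string userName)
    {
        var open = Tickets
            .Where(t => t.AssignedTo == userName && t.ClosedDate == null)
            .ToList();

        foreach (var ticket in open)
        {
            ticket.ClosedDate = DateTime.UtcNow;
        }
        return SaveChanges();
    }
}
```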

If you embrace EF’s DbContext as a top-level business service, either directly or through inheritance, then you will find EF can be a very pleasant experience. You are able to leverage its full power at all layers of your application, and friction with EF’s internals disappears. With custom abstractions of your own, it is hard to reach this level of fluidity.

The Domain Model you’ve read about online:

If you go online and read recent articles about ASP.NET application designs, you’ll find many advocates of domain model designs. This would be great, except that most of them still argue for custom unit-of-work and repository patterns.

The only difference between these designs and the hybrid n-tier layering design I described before is that the abstractions here are a true part of the business layer, and are placed at, or near, the top of the stack.

EF Testable Domain Model

While these designs are superior to pancake models, I find the additional custom abstraction is largely unnecessary, adds little value, and usually creates the same kinds of friction you see in the n-tier approaches.

The reason for the extra layer of abstraction seems to have two sources. Partially it comes from that legacy n-tier thinking being applied, incorrectly, to a domain model. Even though it avoids full layering, the desire for more abstraction still comes from the designer’s inner-pancake.

The bigger force advocating for extra abstractions comes from the Test Driven Development crowd. Early versions of EF were completely hostile to testing. It took insane OOP acrobatics and deep abstractions even to get something vaguely testable.

In EF 4.1, code-first was introduced. It brought us the first versions of DbContext and DbSets, which were fairly friendly towards unit testing. Still though, dependency injection issues usually made an extra layer of abstraction appealing. These middle-versions of EF are where the design I’ve just diagrammed came from.

In current EF versions (6 or higher), DbContext and DbSets are now pervasive throughout all of EF. You can use them with model-first, database-first, and code-first approaches. You can also use them with POCOs, or with diagram generated entities (which are still POCOs in the end). On the testability front, EF has added numerous features to make native EF objects easily testable without requiring these layers of custom abstraction.

You can, through a bit of experimentation, learn how to write great unit tests directly against concrete instances of DbContext and DbSet –without any runtime dependency on the physical database.

How to achieve that level of testability in your model is a topic for another post, but trust me… you don’t need custom repositories for testing EF anymore. All you need is to be smart about how you compose your code, and maybe a little help from a mocking framework here and there.
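Just to give a taste of it (a sketch using the Moq framework and a hypothetical Ticket entity; the wiring follows the pattern shown in the EF 6 testing documentation), you can back a concrete DbSet with a plain in-memory list:

```csharp
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using Moq;

public class TicketQueryTests
{
    public void OpenTickets_ExcludesClosedTickets()
    {
        var data = new List<Ticket>
        {
            new Ticket { Title = "Still open" },
            new Ticket { Title = "Already closed", ClosedDate = DateTime.UtcNow }
        }.AsQueryable();

        // Wire the in-memory list up behind a mocked DbSet by forwarding
        // the IQueryable members to the list's LINQ-to-Objects provider.
        var mockSet = new Mock<DbSet<Ticket>>();
        mockSet.As<IQueryable<Ticket>>().Setup(m => m.Provider).Returns(data.Provider);
        mockSet.As<IQueryable<Ticket>>().Setup(m => m.Expression).Returns(data.Expression);
        mockSet.As<IQueryable<Ticket>>().Setup(m => m.ElementType).Returns(data.ElementType);
        mockSet.As<IQueryable<Ticket>>().Setup(m => m.GetEnumerator()).Returns(data.GetEnumerator());

        // LINQ queries against the mocked set now run entirely in memory,
        // with no database connection required.
        var open = mockSet.Object.Where(t => t.ClosedDate == null).ToList();
        // open contains only the "Still open" ticket
    }
}
```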

I’ve kept most of this discussion pretty high-level. Hopefully it will help expand how you view EF based application designs. With some additional research, you should be able to take these ideas and turn them into real code that’s relevant to your own business domain.


Evolving Custom Applications into Resalable Products (expanded edition)



You’ve been commissioned to build a custom application for a particular client. In addition, you or the client wants the option of putting the application up for resale when it’s done. 

How should you approach this kind of software project?

Custom software takes a lot of time, and is very expensive for the client. They will sink a lot of money into design and development long before they even see the product, so it isn’t unusual to be asked to design for resale. This is a smart move. Reselling a custom application can help recoup the massive up-front costs, and it might even open up a whole new market.

If you get asked to write software like this, do yourself a favor… expect that the initial client is the only client you will ever have. Create an amazing product for the current paying client, and ignore any potential mass-market opportunities for now. If you are successful with your first client, only then should you start seriously worrying about other clients.

Once you have an initial product, you will almost certainly have to redesign and refactor in order to make it ready for the mass-market, so why not just design for the mass-market from the beginning?

Simple… risk and cost.

A mass-market application needs a robust and extensible architecture, more user and developer documentation, a full suite of built-in admin and configuration tools, installers, and support for multiple runtime and database platforms. It may also need localization, theming, and customization features far beyond what your initial client needs. All of this extra stuff takes time and effort, but has little value to the initial client. If you waste their time and money building to support other customers, then you will under-serve the customer you have now. Worse, you don’t even know if the application will be successful, or if it has the mass-appeal you imagine it does.

Getting to version 1.0:

Use an agile process. You don’t have to use a formal methodology, but stick to the agile basics. Code the bare minimum you can get away with, deploy it to your paying customer for testing, get their feedback, then go back and code some more. Repeat until you have all of the 1.0 requirements implemented.

If you are doing this under contract, I suggest that you price your deliverable in iterations too. Embrace your customer’s tendency to introduce feature creep and shift the requirements as you go. Be eager to change the design; just make sure you have a good understanding (and a contractual agreement) around change management, so that you get paid for scope increases, no matter how small they are at first. Once you start doing extras for free, it will be difficult to charge for bigger change requests later.

Brutally cut everything you can from the client’s initial requirements list. Keep it simple, and implement only the bare essentials. Your client will insist that some fluffy features are “essential”. Ignore them and cut them from your implementation plan. You don’t have to tell the client what you’ve cut, just tell them you have their pet features scheduled for “a later iteration” –even if you don’t. As you deploy iterations towards a 1.0 release, your client will forget about many of those fancy features. Over time you will also get a better feel for what is truly important to the application’s success. Don’t waste time writing a bunch of low-value glitz that isn’t necessary –save that stuff for later versions.

Make absolutely sure that every user-facing feature you implement is amazing. It should be pretty, function smoothly, perform well, and be feature complete. It is better to tell the client that you haven’t implemented the user manager than to show them an ugly or incomplete user manager. Everything they see on screen, ever, should look and act like a finished product. Never demo anything that might disappoint.

Avoid over-architecting. It is tempting to layer and componentize everything, follow all the best practices, use all the recommended design patterns, and leverage all those fancy OOP techniques you’ve been reading about.

Don’t do it!

Deliver the necessary features to get the job done, and do it as fast as possible. Where an advanced technique or best practice would reduce your own efforts, go for it! But, if you can’t say exactly how a chosen architecture moves the application towards the finish-line, scale it back.

High-coverage unit testing may be super trendy right now, but carefully consider how much testing you should commit to. Unit tests do not meet any of your application’s actual business requirements. Unit tests, collectively, are a complete application in their own right. They have to be designed, coded, maintained, and documented just like code for any other application. If you do heavy testing, you are effectively building a second application alongside the first, but are only being paid for one of them. That doesn’t mean you shouldn’t do unit testing, even high-coverage testing. But you must be as aware of the costs of testing as you are the benefits –especially when considering a test-first methodology like test driven development (TDD) or behavior driven development (BDD).

If you are working in a dynamic language, high-coverage testing is almost always a good idea. These languages tend to have weak design-time and compile-time validation, and their code analysis tools are often quite limited. High-coverage tests compensate for the lack of compile-time checking and weak code analysis tools. Also, on dynamic platforms, unit tests are usually easy to write and have very low maintenance overhead. You won’t have to spend much effort on abstract architectural acrobatics, and you probably won’t need a lot of test-specific infrastructure. So test away!
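
In Python, for instance, a useful test is only a few lines and needs no test-specific infrastructure. A sketch –the `normalize_username` function is made up, standing in for real application code:

```python
import unittest

def normalize_username(raw):
    # Hypothetical function under test: trim whitespace, lower-case.
    return raw.strip().lower()

class NormalizeUsernameTests(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_fails_loudly_on_bad_input(self):
        # No compiler will catch a bad argument type here, so the test
        # documents the expected failure instead.
        with self.assertRaises(AttributeError):
            normalize_username(None)
```

Run it with `python -m unittest`; there is nothing to configure and nothing to mock.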

For static language environments, weighing the benefits of high-coverage testing against its costs is much more complex. Some testing is a given in any project, but should you go for high-coverage testing, or a full-blown test-oriented development methodology like TDD?

Unit tests have more value with larger projects and larger teams. They also have more value if your team has a mix of developers at different experience levels. For experienced developers and smaller projects, you often get better results just using good code-analysis tools in conjunction with compiler validation. In these cases, you can reserve unit tests for code routines that really need them –complex, tricky, and critical code segments.

Also consider that high-coverage testing is much harder in static language environments. Testing often necessitates horribly complex architectural designs that wouldn’t be necessary otherwise. For non-trivial applications, you’ll also end up with a lot of test-specific infrastructure, mocking frameworks, and the need to develop deep expertise with the testing tools and technologies. So, commit to high-coverage testing only if you are confident that the benefits to the development team are truly worth the costs.

You should move 1.0 forward without much consideration for the mass market, but one concession you should make is to avoid 3rd party code libraries where you can. This includes 3rd party frameworks as well as packaged UI component suites. If you cannot produce an essential feature with built-in components, or can’t write your own easily, then a 3rd party component may be necessary. Try to stick to open source components if you can. Even though you aren’t worried about the retail market yet, you don’t want to trap yourself into dependencies on 3rd party technologies that you can’t legally redistribute, or that increase the price you will have to charge for your retail product.

Don’t worry about making everything configurable via online admin tools in version 1.0. Sure, it is nice to give your initial client built-in admin tools, but it adds complexity and time. Admin tools are not a frequently used feature-set, so they offer a lower return on investment. To the extent you can, stick to config files or manual database entry. Fancy GUI tools can wait until the core application has proven itself successful enough to justify the additional investment in management tools.
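
A sketch of the config-file approach, with sensible defaults in code and overrides in a file –the `settings.json` name and its keys are invented for illustration:

```python
import json

# settings.json might contain, e.g.:
#   {"smtp_host": "mail.example.com", "page_size": 25}
DEFAULTS = {"smtp_host": "localhost", "page_size": 50}

def load_settings(path="settings.json"):
    # Merge file values over defaults; a missing file just means
    # "run with defaults" rather than an error.
    settings = dict(DEFAULTS)
    try:
        with open(path) as f:
            settings.update(json.load(f))
    except FileNotFoundError:
        pass
    return settings
```

When a client wants a setting changed, you edit a text file instead of building, testing, and securing an admin screen.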

Stick to the simplest data access mechanism that meets your needs, and don’t worry about supporting multiple database platforms. Pick a database platform that the paying client can live with, and don’t look back.
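
As an illustration of a deliberately simple data-access layer, here is a sketch using Python’s bundled sqlite3 module as a stand-in for whatever database you pick; the table and helper functions are invented:

```python
import sqlite3

def get_connection(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.row_factory = sqlite3.Row  # rows become dict-like
    return conn

def create_schema(conn):
    # One table, created idempotently; no ORM, no migration framework.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers ("
        "id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def add_customer(conn, name, email):
    cur = conn.execute(
        "INSERT INTO customers (name, email) VALUES (?, ?)",
        (name, email))
    conn.commit()
    return cur.lastrowid

def find_customer(conn, email):
    row = conn.execute(
        "SELECT * FROM customers WHERE email = ?", (email,)).fetchone()
    return dict(row) if row else None
```

Plain, parameterized SQL against one known database is easy to read, easy to debug, and trivially replaced later if 2.0 demands something heavier.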

In version 1.0, the most important feature set is likely going to be reporting. If your application has any reporting needs, and most do, you need to be sure you start with reports that dazzle your client from day one. Be sure the reports are pretty, highly interactive (sorting, filtering, paging, etc.), and are highly printable. Reporting is the part of your application that your client’s management will use the most, and they hold the purse strings. Knock them dead with killer reports!
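
Paging, at least, is cheap to get right from day one. A minimal sketch –the `paginate` helper is hypothetical:

```python
import math

def paginate(rows, page, page_size=25):
    """Return (rows_for_page, total_pages); pages are 1-based."""
    total_pages = max(1, math.ceil(len(rows) / page_size))
    start = (page - 1) * page_size
    return rows[start:start + page_size], total_pages
```

The same slice-and-count logic works whether the rows come from memory or from a database query with LIMIT/OFFSET.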

Later, if you get to the retail market, reporting will also be the feature that makes or breaks most new sales.

After version 1.0: getting to the retail market

I can’t tell you how to find and convince other people to buy your amazing application. What I can tell you is this: if 1.0 makes your initial customer happy, then you have an application that will probably appeal to other clients too.

You can probably make a few minor adjustments to the 1.0 product, and deliver it to other clients as-is. Get it into the hands of as many clients as you can, with as few changes as you can, as soon as you can.

Wait –didn’t I say earlier that mass-market products required tons of extra stuff? Admin tools, documentation, installers, and all that?

Well, I lied. You will want that stuff for version 2.0 of course, but you can usually take a successful 1.0 product to market, even if it is still rough. Sure, it lacks customization, has crappy documentation, and poor admin tools… but if it works well enough for one client, it probably works well enough for at least a few others. So don’t wait on version 2.0; go ahead and try to get 1.0 out there now!

Why the rush?

You want to recoup the sunk costs of the initial development as soon as possible.

You need as many real clients as you can get so you can gauge if there are additional requirements that need to be addressed in the next version.

You can learn from potential clients that decline to buy your application. Ask them, politely, why they passed on your product. Is it too expensive? Do they have a better competing product already? Does your app lack essential features?

So… what about version 2.0?

If 1.0 was successful with your paying customer, you will likely be heading towards a 2.0 product even if you don’t have a retail market –your initial client will still want improvements. If you can’t reach a larger market soon after 1.0 is done, consider abandoning it. This will keep the costs low for your one paying customer, and will reduce the scope of 2.0 considerably. Concentrate on just enhancing the application around your one customer’s more ambitious needs, and shore up the weaknesses in your initial design.

If there is a retail market buying your product, then keep in mind that developers may be part of your customer base. Developers may need to extend or interoperate with your application, so make sure 2.0 has good developer documentation, consistent public APIs, and web service APIs for integration.
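
A minimal sketch of what a versioned, JSON-returning web service endpoint can look like, using only Python’s standard library; the URL path, port, and report payload are all invented:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in data; a real application would query its database here.
REPORTS = {"sales": {"total": 1250, "currency": "USD"}}

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Version the URL (/api/v1/...) so a future 2.0 can change the
        # response shape without breaking existing integrations.
        if self.path == "/api/v1/reports/sales":
            body = json.dumps(REPORTS["sales"]).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "Unknown API path")

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

def run(port=8000):
    HTTPServer(("", port), ApiHandler).serve_forever()
```

The point isn’t the framework –it’s that consistent, versioned, documented endpoints are what let other developers build on your product without reading your source.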

Version 2.0 should mostly be about fixing up any sloppy code in 1.0 and improving the design to support those features you cut from the client’s 1.0 wish-list. Re-design, re-architect, and re-code the core framework to support future enhancements going forward. You probably aren’t quite ready for a lot of new user-facing features yet, so save most of those for versions after 2.0.

Version 2.0 is where you start thinking about extensibility, customization, internationalization, and advanced administration. This is also the time to give scalability more serious thought. What about really large customers with lots of data? Are you going to support multi-server or cloud deployments? Should you aim to provide the product as a software service, instead of a packaged product?

This is also the time to consider if the application needs interoperability with other products… perhaps you need to support data imports for customers migrating from a competing product, or using your product in conjunction with a larger suite of products.

This is also the time to reconsider your development team itself. Do you need to contract outside resources, or hire more people? Do you need visual and graphic designers? Do you need code optimization or big-data specialists? What about hiring a marketing firm? How about payment, billing, and sales?

One thing you should consider not doing, in any version, is supporting multiple database or server platforms. Pick the platform that makes the most sense given the development platform. The odds are very good that most clients can deal with your choice, even if they prefer another database or server platform. Is the cost of developing for multiple platforms going to generate enough new sales to pay for itself? Probably not. You’d be better off offering your product as a cloud service than trying to deal with multi-platform applications. The exception is mobile applications, where you should certainly support multiple platforms right from the start.

And that’s really all I have to offer… once you get to designing version 2.0, you will know better than I do what you should focus on.