Evolving Custom Applications into Resalable Products (expanded edition)


You’ve been commissioned to build a custom application for a particular client. In addition, you or the client wants the option of putting the application up for resale when it’s done. 

How should you approach this kind of software project?

Custom software takes a lot of time, and is very expensive for the client. They will sink a lot of money into design and development long before they even see the product, so it isn’t unusual to be asked to design for resale. This is a smart move. Reselling a custom application can help recoup the massive up-front costs, and it might even open up a whole new market.

If you get asked to write software like this, do yourself a favor… expect that the initial client is the only client you will ever have. Create an amazing product for the current paying client, and ignore any potential mass-market opportunities for now. If you are successful with your first client, only then should you start seriously worrying about other clients.

Once you have an initial product, you will almost certainly have to redesign and refactor in order to make it ready for the mass-market, so why not just design for the mass-market from the beginning?

Simple… risk and cost.

A mass-market application needs a robust and extensible architecture, more user and developer documentation, a full suite of built-in admin and configuration tools, installers, and support for multiple runtime and database platforms. It may also need localization, theming, and customization features far beyond what your initial client needs. All of this extra stuff takes time and effort, but has little value to the initial client. If you waste their time and money building to support other customers, then you will under-serve the customer you have now. Worse, you don’t even know if the application will be successful, or if it has the mass-appeal you imagine it does.

Getting to version 1.0:

Use an agile process. You don’t have to use a formal methodology, but stick to the agile basics. Code the bare minimum you can get away with, deploy it to your paying customer for testing, get their feedback, then go back and code some more. Repeat until you have all of the 1.0 requirements implemented.

If you are doing this under contract, I suggest that you price your deliverable in iterations too. Embrace your customer’s tendency to introduce feature creep, and shift the requirements as you go. Be eager to change the design; just make sure you have a good understanding (and a contractual agreement) around change management, and that you are getting paid for scope increases, no matter how small they are at first. Once you start doing extras for free, it will be difficult to charge for bigger change requests later.

Brutally cut everything you can from the client’s initial requirements list. Keep it simple, and implement only the bare essentials. Your client will insist that some fluffy features are “essential”. Ignore them and cut them from your implementation plan. You don’t have to tell the client what you’ve cut, just tell them you have their pet features scheduled for “a later iteration” –even if you don’t. As you deploy iterations towards a 1.0 release, your client will forget about many of those fancy features. Over time you will also get a better feel for what is truly important to the application’s success. Don’t waste time writing a bunch of low-value glitz that isn’t necessary –save that stuff for later versions.

Make absolutely sure that every user-facing feature you implement is amazing. It should be pretty, function smoothly, perform well, and be feature complete. It is better to tell the client that you haven’t implemented the user manager than to show them an ugly or incomplete user manager. Everything they see on screen, ever, should look and act like a finished product. Never demo anything that might disappoint.

Avoid over-architecting. It is tempting to layer and componentize everything, follow all the best practices, use all the recommended design patterns, and leverage all those fancy OOP techniques you’ve been reading about.

Don’t do it!

Deliver the necessary features to get the job done, and do it as fast as possible. Where an advanced technique or best practice would reduce your own efforts, go for it! But, if you can’t say exactly how a chosen architecture moves the application towards the finish-line, scale it back.

High-coverage unit testing may be super trendy right now, but carefully consider how much testing you should commit to. Unit tests do not meet any of your application’s actual business requirements. Unit tests, collectively, are a complete application in their own right. They have to be designed, coded, maintained, and documented just like code for any other application. If you do heavy testing, you are effectively building a second application alongside the first, but are only being paid for one of them. That doesn’t mean you shouldn’t do unit testing, even high-coverage testing. But you must be as aware of the costs of testing as you are the benefits –especially when considering a test-first methodology like test driven development (TDD) or behavior driven development (BDD).

If you are working in a dynamic language, high-coverage testing is almost always a good idea. These languages tend to have weak design-time and compile-time validation, and their code analysis tools are often quite limited; high-coverage tests compensate for both weaknesses. Also, in dynamic platforms, unit tests are usually easy to write, and have very low maintenance overhead. You won’t have to spend much effort on abstract architectural acrobatics, and you probably won’t need a lot of test-specific infrastructure. So test away!

For static language environments, weighing the benefits of high-coverage testing against its costs is much more complex. Some testing is a given in any project, but should you go for high-coverage testing, or a full-blown test-oriented development methodology like TDD?

Unit tests have more value with larger projects and larger teams. They also have more value if your team has a mix of developers at different experience levels. For experienced developers and smaller projects, you often get better results just using good code-analysis tools in conjunction with compiler validation. In these cases, you can reserve unit tests for code routines that really need them –complex, tricky, and critical code segments.

Also consider that high-coverage testing is much harder in static language environments. Often testing necessitates horribly complex architectural designs that wouldn’t be necessary otherwise. For non-trivial applications, you’ll also end up with a lot of test-specific infrastructure, mocking frameworks, and the need to develop deep expertise with the testing tools and technologies. So, commit to high-coverage testing only if you are sure that the benefits to the development team are really worth the costs.

You should move 1.0 forward without much consideration for the mass-market, but one concession you should make is to avoid 3rd party code libraries where you can. This includes 3rd party frameworks as well as packaged UI component suites. If you cannot produce an essential feature with built-in components, or can’t write your own easily, then a 3rd party component may be necessary. Try to stick to open source components if you can. Even though you aren’t worried about the retail market yet, you don’t want to trap yourself into dependencies on 3rd party technologies that you can’t legally redistribute, or that increase the price you will have to charge for your retail product.

Don’t worry about making everything configurable via online admin tools in version 1.0. Sure, it is nice to give your initial client built-in admin tools, but it adds complexity and time. Admin tools are not a frequently used feature-set, so they offer a lower return on investment. To the extent you can, stick to config files or manual database entry. Fancy GUI tools can wait until the core application has proven itself successful enough to justify the additional investment in management tools.

Stick to the simplest data access mechanism that meets your needs, and don’t worry about supporting multiple database platforms. Pick a database platform that the paying client can live with, and don’t look back.

In version 1.0, the most important feature set is likely going to be reporting. If your application has any reporting needs, and most do, you need to be sure you start with reports that dazzle your client from day one. Be sure the reports are pretty, highly interactive (sorting, filtering, paging, etc.), and are highly printable. Reporting is the part of your application that your client’s management will use the most, and they hold the purse strings. Knock them dead with killer reports!

Later, if you get to the retail market, reporting will also be the feature that makes or breaks most new sales.

After version 1.0: getting to the retail market:

I can’t tell you how to find and convince other people to buy your amazing application. What I can tell you is this: if 1.0 makes your initial customer happy, then you have an application that will probably appeal to other clients too.

You can probably make a few minor adjustments to the 1.0 product, and deliver it to other clients as-is. Get it into the hands of as many clients as you can, with as few changes as you can, as soon as you can.

Wait… didn’t I say earlier that mass-market products require tons of extra stuff? Admin tools, documentation, installers, and all that?

Well, I lied. You will want that stuff for version 2.0 of course, but you can usually take a successful 1.0 product to market, even if it is still rough. Sure, it lacks customization, has crappy documentation, and poor admin tools… but if it works well enough for one client, it probably works well enough for at least a few others. So don’t wait on version 2.0; go ahead and try to get 1.0 out there now!

Why the rush?

You want to recoup the sunk costs of the initial development as soon as possible.

You need as many real clients as you can get so you can gauge if there are additional requirements that need to be addressed in the next version.

You can learn from potential clients that decline to buy your application. Ask them, politely, why they passed on your product. Is it too expensive? Do they have a better competing product already? Does your app lack essential features?

So… what about version 2.0?

If 1.0 was successful with your paying customer, you will likely be heading towards a 2.0 product even if you don’t have a retail market –your initial client will still want improvements. If you can’t reach a larger market soon after 1.0 is done, consider abandoning the larger market. This will keep the costs low for your one paying customer, and will reduce the scope of 2.0 considerably. Concentrate on just enhancing the application around your one customer’s more ambitious needs, and shore up the weaknesses in your initial design.

If there is a retail market buying your product, then keep in mind that developers may be part of your customer base. Developers may need to extend or interoperate with your application, so make sure 2.0 has good developer documentation, consistent APIs, and web service APIs.

Version 2.0 should mostly be about fixing up any sloppy code in 1.0 and improving the design to support those features you cut from the client’s 1.0 wish-list. Re-design, re-architect, and re-code the core framework to support future enhancements going forward. You probably aren’t quite ready for a lot of new user-facing features yet, so save most of those for versions after 2.0.

Version 2.0 is where you start thinking about extensibility, customization, internationalization, and advanced administration. This is also the time to give scalability more serious thought. What about really large customers with lots of data? Are you going to support multi-server or cloud deployments? Should you aim to provide the product as a software service, instead of a packaged product?

This is also the time to consider if the application needs interoperability with other products… perhaps you need to support data imports for customers migrating from a competing product, or using your product in conjunction with a larger suite of products.

This is also the time to reconsider your development team itself. Do you need to contract outside resources, or hire more people? Do you need visual and graphic designers? Do you need code optimization or big-data specialists? What about hiring a marketing firm? How about payment, billing, and sales?

One thing you should consider not doing, in any version, is supporting multiple database or server platforms. Pick the platform that makes the most sense given the development platform. The odds are very good that most clients can deal with your choice, even if they prefer another database or server platform. Is the cost of developing for multiple platforms going to generate enough new sales to pay for itself? Probably not. You’d be better off offering your product as a cloud service, than trying to deal with multi-platform applications. The exception is mobile applications, where you should certainly support multiple platforms right from the start.

And that’s really all I have to offer… once you get to designing version 2.0, you will know better than I do what you should focus on.

Die Twitter!

Starting last summer, my Twitter account started getting hijacked to send out marketing spam. No biggie, I thought, I’ll just change the password. A few days later, hacked again… then again… then again…

I unhooked Twitter integration from all my other sites and services, pulled all the app permissions, and even wrote a quick app to generate a random 14-character super-strong password (mixed case, special characters, numbers, etc.).

Then… yup! Hacked again.

Twitter would catch on to the hack each time, after my account started spamming random adverts all over the place. They’d disable the account, and email me to change the password.

I don’t use Twitter much. I didn’t have their native apps installed on my phone, nor my computer. I don’t even like Twitter very much. So eventually I just decided to leave the account disabled, figuring that eventually Twitter would tighten up their security.

Then a couple of days ago, I got an email telling me the account had been hacked again. I had never re-enabled it after that last hack, but sure enough, when I went to the site I could login using my randomly generated password no problem.

So that’s it… I’ve killed my Twitter account.

I have more than 140 characters of ideas about what Twitter can do with themselves.

 

TicketDesk 3 Dev Diary – Update: One AspNet and Aspnet.Identity

Since most of what I’m working on now is being done in my private repository, I wanted to give everyone a quick update on TicketDesk 3’s progress.

I’m still working on TD3, but late in the fall it became apparent that I needed to wait on the RTM version of Visual Studio 2013 and the new Asp.net Web API 2 and Identity bits. Now that this stuff has all been released, and most of the dependencies have caught up, I have resumed work on TD3.

The main challenge is incorporating the new Aspnet.Identity framework. I am thrilled that Microsoft finally replaced the old security providers, but the transition to Aspnet.Identity is not all roses. The new framework is not as well documented as I’d like, and guidance on advanced uses is still thin. It is also a fairly complex framework that requires a good bit of manual coding, especially when used in conjunction with a SPA front-end. Fortunately for me, Yago Pérez Vázquez has created a template project called DurandalAuth that does exactly what I’ve been trying to do with TicketDesk… combine the bleeding edge versions of Aspnet.Identity and Web API 2 with the bleeding edge versions of Durandal, Breeze, and Bootstrap.

In fact, his template is so good, that I’m pretty much building TicketDesk 3 on top of his template instead of just porting the authentication stuff into my existing TD3 project… and this is why the project hasn’t been pushed into the public repository just yet… it’s a new project that doesn’t yet have all the features from the old one.

There are a few things about the DurandalAuth template that I’m not so sure about; the use of StructureMap instead of Ninject for IoC, and the fact that he’s layered the back end with a custom repository and unit of work pattern… but overall, the design is generally the same as what I had been designing for TD3; except that he’d also implemented the new asp.net identity bits. The template also includes some SEO stuff that isn’t relevant to TicketDesk 3, though I may leave it there in case people want to make use of it.

At present, I’m in the process of combining the new project with the code I’ve already written for TD3, and adapting the design to TD3’s particular needs (internationalization for example). This will take a few more weeks, but once I’m done I will be able to push the new project to the public GitHub repository for everyone else to look at.

The technology stack for TD3 is now complete; and includes the following:

  • Asp.net Web API 2
  • Durandal 2 & Breeze
  • i18Next
  • Aspnet.Identity
  • Bootstrap 3
  • SignalR 2

The only major design element that I’ve yet to work out completely is using ACS and/or ADFS security. The identity implementation for Web API 2 uses a different mechanism (organization accounts) for integrating with ACS and ADFS, so I’ll have to find a way to smash the two options together, and provide enough internal abstractions that either configuration is possible with minimal additional code.

I called it – Ballmer leaves Microsoft

Just over a year ago, I wrote a review of Windows 8, based on the release preview version that shipped last summer. At the end of that piece, I predicted that Steve Ballmer would be forced out as CEO in 2013. It turns out I was right. Ballmer has announced his pending resignation.

This is a perfect time for Ballmer to leave. It’s been long enough since Surface RT and Windows 8’s rocky releases for Ballmer to take most of the negativity around the company’s market missteps into retirement with him. The next big release cycle is still reasonably far off. If it’s a good release, the new CEO will be able to take all the credit for it, and if it isn’t so good then it can still be blamed on Ballmer.

Poor guy. I’ve met Mr. Ballmer, though I’m not familiar enough to comfortably call him Steve. Factually speaking, Microsoft did very well under his leadership. It grew market share, expanded into new markets, and maintained a healthy financial bottom-line. As a person, the most striking thing about Ballmer is that he is a true believer. He believes in Microsoft as a company, he believes in its products, and he believes in its mission. But what impresses me the most is that he has always believed that Microsoft could be better and do more –he always moved forward with genuine optimism and enthusiasm.

It’s tragic his last act as CEO will be to take all the blame for all the company’s faults. That, my friends, is the kind of self-sacrifice worthy of respect. It will give Microsoft a chance to turn its image around, and to regain its footing in its troubled consumer and mobile segments.

I just hope that whoever takes his place understands what Ballmer’s exit means for the company, and can capitalize on the opportunity. Microsoft still has all the financial, legal, and technical tools that it needs to right itself… all it needs is someone with vision enough to get it done.

BTW Microsoft, I’m open to new opportunities. If you have trouble locating a suitable replacement CEO, just give me a call. I can fix it.

TicketDesk 3 Dev Diary – MEF, IoC, and Architectural Design

TicketDesk 2 and TicketDesk 3 have some key architectural differences. Both enforce a strict separation of concerns between business and presentation layers, but there are major architectural differences within each layer. In this installment, I’d like to talk about how the back-end architecture will evolve and change.

TicketDesk 2 – Decoupled design:

The most significant technology that shaped TicketDesk 2’s class library design was the use of the Managed Extensibility Framework (MEF). The use of MEF in TicketDesk 2 was not about modularity, at least not in a way that is relevant to business requirements. TicketDesk 2 was never intended to support plug-ins or dynamic external module loading. I used MEF for two reasons: I was giving test driven development (TDD) another shot, and I had planned to write a Silverlight client for TicketDesk 2.

MEF was originally built by the Silverlight team. It had a lot of potential for other environments, but didn’t play well with MVC back then. It took some dark magic and hacking to just make it work there. MEF is an extensibility framework first, but an IoC container only by accident. While MEF can do the job of an IoC container, it wasn’t particularly good in that role.

As an extensibility framework, MEF actually has more in common with require.js than traditional server-side IoC frameworks. As a Silverlight technology, the primary purpose was to enable clients to download executable modules from the server on demand when needed. This is exactly what require.js does for JavaScript in HTML applications. The truly interesting thing is that TicketDesk 2 did not use MEF in this way at all. Asp.Net MVC is a server-side environment following a request-response-done type execution flow. Deferred module loading isn’t relevant in that kind of environment. TicketDesk used MEF only for its secondary IoC features — runtime composition and dependency injection.

Considering the difficulty in getting MEF working, and the fact that there are better IoC frameworks for MVC, I should have scrapped MEF in favor of Ninject –which has made me very happy in dozens of other projects. I stuck with MEF partly because it would pay off when I got to the Silverlight client, and partly because I liked the challenge that MEF presented.

Sadly, I was only three weeks into development on TicketDesk Silver, the Silverlight client, when Microsoft released Silverlight’s obituary. I had two other projects under development with Silverlight at the time, so that was a very bad summer for me.

The modular design of TicketDesk’s business layer is mostly about testability. EF 4 was quite hostile to unit testing, so I did what everyone else was doing… I wrapped the business logic in unit-of-work and repository patterns, and made sure the dependencies targeted abstract classes and interfaces. If you want to get all gang-of-four about it, the service classes in TD2 are more transaction script than unit-of-work, but it gets the same job done either way. This gave me the level of testability I needed to follow a (mostly) TDD workflow.

One thing I have never liked about heavy unit testing, and TDD in particular, is having to implement complex architectures purely for the sake of making the code testable. I’ll make some design concessions for testability, but I have a very low tolerance for design acrobatics that have nothing to do with an application’s real business requirements.

TicketDesk 2 walks all over that line. I dislike that there are a dozen or more interfaces that would only ever have one (real) concrete implementation. Why have an interface that only gets inherited by one thing? I also dislike having attributes scattered all over the code just to describe things to an IoC container. Neither of those things make TicketDesk work better. It just makes it more complex, harder to understand, and harder to maintain.

On the flip-side, I was able to achieve decent testability without going too far towards an extreme architecture. The unit tests did add value, especially early in the development process –they caught a few bugs, helped validate the design, and gave me some extra confidence.

If you noticed that the current source lacks unit tests, bonus points to you! My TDD experiment was never added to the public repository. I was pretty new to TDD, and my tests were amateurish (to be polite). They worked pretty well, and let me experience real TDD, but I didn’t feel that the tests themselves made a good public example of TDD in action.

TicketDesk 3 – Modularity where it matters:

A lot has changed for the better since I worked on TicketDesk 2.

Some developers still write their biz code in a custom unit-of-work and repository layer that abstracts away all the entity framework stuff, which is fine. But when EF code-first introduced the DbContext, it became much friendlier towards unit testing. The DbContext itself follows a unit-of-work pattern, while its DbSets are a generic repository pattern. You don’t necessarily need to wrap an additional layer of custom repository and unit-of-work on top of EF just to do unit testing anymore.

I plan to move most of the business logic directly into the code-first (POCO) model classes. Extension methods allow me to add functionality to any DbSet&lt;T&gt; without having to write a custom implementation of the IDbSet interface for each one. And the unit-of-work nature of the DbContext allows me to put cross-cutting business logic in the context itself. Basically, TD 3 will use something close to a true domain model pattern.

As for dependency injection, the need to target only interfaces and abstract types has been reduced. An instance of a real DbContext type can be stubbed, shimmed, or otherwise mocked most of the time. In theory, I should be able to target stubbed/shimmed instances of my concrete types. If I find the need to target abstracts, I can still refactor the DbSets and/or DbContext to inherit custom interfaces. There still isn’t a compelling need to wrap the business logic in higher layers of abstraction.

In TicketDesk 3, I will not be using a TDD workflow. I love unit testing, but am traditionally very selective about what code I choose to test. I write tests for code that will significantly benefit from them –complex and tricky code. I don’t try to test everything. Using TDD as a design tool is a neat thought process, but I find that design-by-test runs counter to my personal style of design. I can easily see how TDD helps people improve their designs, but I personally tend to achieve better designs when I’m coding first and testing later.

When I do get to the need for dependency injection, I plan to run an experimental branch in TicketDesk 3 to explore MEF 2 a bit further. I think they have fixed the major issues that made MEF 1 hard to use in web environments, but it is almost impossible to find good information online about MEF 2. The documentation, when you can find it, is outdated, contradictory, and just plain confusing. What I have found suggests that MEF 2 does work with MVC 4, but still requires some custom infrastructure. What I don’t know is how well it works.

With the need for dependency injection reduced, few compelling extensibility requirements on the back-end, and no plans to do heavy unit testing, I am more inclined to go with Ninject. They care enough to write top-notch documentation, and it was designed explicitly for the purpose of acting as an IoC container… which is the feature set TicketDesk actually needs.

TicketDesk 3 Dev Diary – Localization and Internationalization

One of the most frequently requested features for TicketDesk 2 was support for localization. TicketDesk is a stringy application; lots of system generated text that will end up in the user’s face at some point. Localizing TD2 required combing through the source code, line-by-line, translating magic strings by hand.

Clearly, this is not an acceptable approach with TicketDesk 3.

Since localization is thorny, and a weak spot in my own skill-set, I consider it essential to design for localization as early in the process as possible… and now that the code has gotten to the point where it draws a useful UI, it is time to get started.

In the typical Asp.Net application, localization is mostly just a matter of creating resource files that contain the text translations, then making sure the code only gets text from those resources. There is a lot of support in .Net around localization, cultures, and resource files, so this is pretty easy to do. The only difficult part, for the mono-lingual developer, is getting the text translated into those other languages in the first place.

TicketDesk 3 is a SPA application, which presents a different problem. The UI is mostly HTML and JavaScript, so all that nice .Net localization stuff is unavailable when generating the screens that users will actually see. So the first step was to find a JavaScript library for localization; something that does the same job as .Net resource files. The second challenge was connecting that JavaScript library to the server-side .Net resource files.

Fortunately, there is a fantastic JavaScript library called i18next that fits the bill.

Translations in TicketDesk 3:

i18next follows a pattern similar to server-side .Net resource files. You supply it with json files that contain translated text. Once i18next has the text, it takes care of binding it to the UI via HTML data-* attributes, or through javascript functions directly. As a bonus, i18next is easy to use in conjunction with knockout’s own UI binding.
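As a rough illustration (this is a simplified sketch of the lookup i18next performs, not the library’s actual code), resolving a dotted key against a nested translation file looks like this:

```javascript
// A nested json translation file, of the kind i18next consumes.
var translations = {
  ticket: {
    status: { open: 'Open', closed: 'Closed' }
  }
};

// Resolve a dotted key like 'ticket.status.open' against the nested json.
function translate(store, key) {
  return key.split('.').reduce(function (node, part) {
    return node ? node[part] : undefined;
  }, store);
}
```

In markup, i18next drives this same lookup from a data attribute, e.g. `<span data-i18n="ticket.status.open"></span>`.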

TicketDesk performs text functions on the server too, so it still needs .Net resource files. Rather than maintaining separate translation files for the server and client, I wanted to pipe the contents of the resource files to i18next directly. For this, I leveraged Asp.Net Web Api. Configuring i18next to get its json translations from Web Api is simple –just map the URLs it uses to Web Api endpoints.
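The client-side half of that wiring amounts to pointing i18next’s resource path at the Web Api route. A sketch (the `api/localization` route name is hypothetical; `__lng__` and `__ns__` are placeholders i18next substitutes per request):

```javascript
// Tell i18next to load translations from a Web Api endpoint
// instead of static json files under /locales.
$.i18n.init({
  resGetPath: 'api/localization/__lng__/__ns__',
  fallbackLng: 'en'
});
```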

The Web Api controller itself was a bit more complex. It has to detect the specific language that i18next is requesting, then build an appropriate response in a format i18next can consume. The controller loads a ResourceSet for the requested culture, then loops through the properties to build a dynamic key/value object with all the translated text. Once that’s done, it outputs the dynamic collection as a json response.
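Sketched in JavaScript rather than the actual C# controller code, the core transformation is essentially this: flatten the ResourceSet’s name/value entries into the object that gets serialized as the json response (the entry names here are made up for illustration):

```javascript
// Build the response body: a flat key/value object that will be
// serialized to json and consumed by i18next.
function buildTranslationResponse(resourceEntries) {
  var body = {};
  resourceEntries.forEach(function (entry) {
    body[entry.name] = entry.value;
  });
  return body;
}

// e.g. two entries from a hypothetical Spanish ResourceSet:
var body = buildTranslationResponse([
  { name: 'HomeTitle', value: 'TicketDesk' },
  { name: 'Ticket_Open', value: 'Abierto' }
]);
```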

i18next has a richer set of capabilities than straight .Net resource files. Resource files are just simple name/value lookups. With i18next, the translation files can have nested properties, replacement variables, and features for interpolation (plural forms, gender forms, etc.). These features are available in .Net with some advanced language frameworks, but the basic resource files don’t go that far. Fortunately, TicketDesk only needs the basic set of features, so a flat name/value lookup should be sufficient to get the job done; though it doesn’t leverage some of i18next’s more advanced features.
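For example, i18next’s replacement variables use `__name__`-style placeholders inside the translation strings. A simplified sketch of that interpolation (an illustration of the behavior, not the library’s implementation):

```javascript
// Replace __name__ placeholders with supplied values, leaving
// unmatched placeholders intact.
function interpolate(template, values) {
  return template.replace(/__(\w+)__/g, function (match, name) {
    return values.hasOwnProperty(name) ? values[name] : match;
  });
}
```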

Localization is more than text translations. TicketDesk also deals with numbers occasionally, and there are some dates too. Fortunately, it isn’t a number-heavy application, nor are there user-editable dates or numbers. The moment.js library easily handles local date display formatting, and numeral.js can handle the couple of numbers.
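For illustration only, the kind of locale-aware formatting moment.js and numeral.js provide can be sketched with the standard Intl API (TicketDesk itself uses the two libraries; this just makes the idea concrete):

```javascript
// Format a date and a number according to a given locale.
function formatForLocale(date, amount, locale) {
  return {
    date: new Intl.DateTimeFormat(locale).format(date),
    amount: new Intl.NumberFormat(locale).format(amount)
  };
}
```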

The main weak point in TicketDesk 3’s localization will be an absence of structural language transformations. Once you get into right-to-left languages and other exotic forms, the physical layout of the entire UI has to change. Sadly, I do not have the expertise to correctly design for such languages.  HTML 5 and CSS 3 do have decent support for this kind of cultural formatting though, so my hope is that anyone needing to localize for these languages can do so themselves without difficulty.

Internationalization:

My intention for TicketDesk 3 was simple localization; the admin would tell the server what language to use, and the system would just generate the UI in that language for all users. I did not initially expect to support dynamic internationalization — the ability to support multiple languages based on individual user preference.

When I got into the details of the i18next implementation though, it quickly became apparent that TicketDesk 3 could easily achieve real internationalization… in fact, internationalization would be about as easy as static localization.

The result is that TicketDesk 3 will be internationalized, not just localized. It will detect the user’s language and dynamically serve up a translated UI for them, as long as resource files exist for their language. If translations for their specific language and culture aren’t available, it will fall back to the best-matching language, or to the default if no better match exists.
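The fallback rule can be sketched as a simple resolution function. This is purely an illustration of the behavior described above (the names and signature are mine, not TicketDesk’s); the real resolution is handled by i18next and the .Net resource system:

```csharp
using System;
using System.Linq;

public static class LanguageFallback
{
    // Resolve a requested culture against the set of available
    // translation sets, falling back as described above.
    public static string Resolve(string requested, string[] available, string fallback)
    {
        // Exact match first (e.g. "es-MX").
        var exact = available.FirstOrDefault(a =>
            a.Equals(requested, StringComparison.OrdinalIgnoreCase));
        if (exact != null) return exact;

        // Fall back to the neutral language (e.g. "es-MX" -> "es").
        var neutral = requested.Split('-')[0];
        var partial = available.FirstOrDefault(a =>
            a.Equals(neutral, StringComparison.OrdinalIgnoreCase));
        if (partial != null) return partial;

        // No match at all: use the site default.
        return fallback;
    }
}
```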

State of the Code:

I have the plumbing for internationalization in place in the alpha already. It auto-detects the browser’s language, or you can override it via the query string (e.g. ?setLng=es-MX). Since I don’t speak any other languages, I can’t provide any real translations myself. For the short term, I have created a generic Spanish resource file, into which I copied the English text surrounded by question marks. This isn’t real localization, but it serves to validate that the localization support works correctly.

For dates, I’m using moment.js, so it should adapt to the browser’s language settings automatically, but I haven’t set up moment to use the querystring override yet… I’ll get to that soon though. I’m not doing any number formatting yet, but when I do I’ll implement numeral.js or a similar library.

When TicketDesk 3 gets into beta, and I have the full list of English text strings in resource files, then I will get a native Spanish speaker to help generate a real set of translations. Hopefully, the community will pitch in to provide other languages too.

If you want to take a look at the early alpha version, I have published a TicketDesk 3 demo on Azure. I can’t promise that it will be stable, and it certainly isn’t a complete end-to-end implementation. But feel free to play around with it. To play with localization, either change your browser’s language to something Spanish (es, es-MX, or similar), or use the querystring override: ?setLng=es

TicketDesk 3 Dev Diary – SignalR

One of the overall design goals of TicketDesk since version 1 has been to facilitate near-frictionless, bi-directional communication between help desk staff and end-users. Tickets should evolve as a natural conversation, and the entire history of those conversations should be right there in the ticket’s activity log. TicketDesk has done a reasonably good job in this area, but SignalR presents an opportunity to take this idea to a whole different level.

The goal behind the SignalR library is to give the server a live, real-time channel to code running on the client. The server can push notifications whenever things change, and that information is available to the user immediately. The techniques that SignalR uses to achieve this are not entirely new, but they have historically been difficult to implement.

TicketDesk 3 uses Breeze on the client, and Breeze tracks all the entities it has retrieved from the server. Knockout is used to bind those entities to the UI for display. The beauty of this combination is that Knockout automatically updates the UI anytime the entities in Breeze change.

With SignalR, the browser can listen in the background for updates from the TicketDesk server. When the server notifies the client that a ticket has changed, the client can then choose to fetch the new data in the background, and update the local copy being tracked by Breeze… and Knockout will automatically refresh the display to show that updated data to the user.

The best thing about SignalR is that it is trivially easy to set up, and with the combination of Breeze and Knockout it is super simple for the UI to respond intelligently.

As a proof of concept, I have coded up a simple SignalR hub that will tell all connected clients when a ticket changes (and what the ID of the changed ticket is). The client will check to see if it is tracking a ticket with that ID, and if so it will automatically fetch a new copy of the ticket from the server. Anything on the screen that is bound to that ticket will automatically update to show the changes. This was not only very easy to implement, but it seems to work very well.
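A hub along these lines might look like the following sketch. The class and method names here are hypothetical, not the actual TicketDesk code, and it assumes the Microsoft.AspNet.SignalR server package:

```csharp
// Illustrative sketch only: hub and client method names are hypothetical.
using Microsoft.AspNet.SignalR;

public class TicketHub : Hub
{
    // Call this from server-side code whenever a ticket is saved;
    // it broadcasts the changed ticket's ID to all connected clients,
    // which can then decide whether to re-fetch that ticket.
    public static void NotifyTicketChanged(int ticketId)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<TicketHub>();
        context.Clients.All.ticketChanged(ticketId);
    }
}
```

On the client, a handler registered for the ticketChanged message checks whether Breeze is tracking a ticket with that ID and, if so, re-queries it in the background; Knockout then refreshes any bound UI automatically.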

I then took it a step further, and coded up several special routines for the ticket list view. Not only does it update the tickets displayed on screen, but it also responds intelligently to changes in the number of open tickets, or changes of the order of the tickets within the list.

This list view, as currently implemented in the alpha, is a paged list showing 5 items on screen at a time. Because the list is sorted by the last updated date, anytime a ticket is updated the order of items in the list changes too. If a ticket is closed or re-opened, the number of items will grow or shrink. Pager buttons may need to be disabled or enabled, items on the current page may change as tickets are shuffled around, and the current page might not even be valid anymore if the number of open tickets shrinks enough.

With very little effort, I was able to code up a list view that dynamically responds to real-time changes on the server, and keeps itself current without the user ever needing to explicitly refresh the screen.

I plan to use this set of capabilities around SignalR to make the entire application behave in near real-time. The ticket activity log will behave like a real-time chat conversation, lists will automatically adjust as things change, and notifications will appear to keep the user informed.

If you want to take a look at the early alpha version, I have published a TicketDesk 3 demo on Azure. I can’t promise that it will be stable, and it certainly isn’t a complete end-to-end implementation. But feel free to play around with it.

To see the SignalR behavior in action, just open the site in two browsers at the same time. Make changes to a ticket in one, and watch the other browser update itself.

TicketDesk 3 Dev Diary – Hot Towel

For TicketDesk 3, what I most hope to achieve is an improvement in the overall user experience. Since I wrote TicketDesk 2, much has happened in the JavaScript world. New JavaScript frameworks have matured, enabling deeper user experiences with much less development effort than ever before. TicketDesk is a perfect candidate for Single Page Application (SPA) frameworks, so all I had to do was pick a technology stack and learn to use it.

I have decided to start from the wonderful Hot Towel SPA by John Papa. Hot Towel is a Visual Studio project template that combines packages from several different client and server frameworks. It includes Knockout.js for UI data-binding, Durandal for routing and view management, and Breeze for talking to the ASP.NET Web Api backend.

My main reasons for choosing Hot Towel are:

  • It is a complete end-to-end SPA Template.
  • It is well documented.
  • The components it relies on are reasonably mature, and well maintained.
  • There are good sample applications built on Hot Towel.
  • John Papa published an excellent, and highly detailed, video course for Hot Towel at Pluralsight.
  • It is very easy to learn and use.

One of the disappointments when switching from server-side Asp.Net to a SPA framework is that the UI is pure JavaScript, HTML, and CSS. It makes almost no use of MVC controllers or views, which always makes me feel like I’m wasting valuable server capabilities. A SPA does make heavy use of Asp.Net Web Api for transferring data, but the UI leaves all that wonderful Asp.Net and Razor view engine stuff behind.

Once I learned my way around Hot Towel, I was surprised to find that working with Knockout, Durandal, and Breeze on the client is much easier than working with Asp.Net on the server. I’m no fan of JavaScript as a language, but the current crop of JavaScript frameworks is truly amazing.

Now that I’ve learned my way around Hot Towel’s various components, I’ve been able to develop a fairly advanced set of UI features very quickly. The current UI is very raw and only provides a primitive set of features, but it has already exceeded my initial expectations by several orders of magnitude.

If you want to take a look at the early alpha version, I have published a TicketDesk 3 demo on Azure. I can’t promise that it will be stable, and it certainly isn’t a complete end-to-end implementation. But feel free to play around with it.

Asp.net 4.5 mvc or webforms – model binding dropdownlist to an enum with description attributes

One of the more common problems people encounter when working with any .Net user application is the need to put a UI on top of some enumeration. Normally you need to present a friendly list of all the possible items in the enumeration, then allow the user to pick one of them. This UI typically takes the form of a drop down list, combo box, list box, or similar.

Enums are wonderful in C#, but unlike some other languages, they are also a very thin type. Enums define a collection of named constants. By default, each enumerator in the list equates to an underlying integer value.

Here is an example Enum, and for clarity I’ve specified the value for each enumerator:

public enum CarMake
{
    Ford,     //0
    Chevy,    //1
    Toaster   //2
}

Enums are lightweight, highly efficient, and often very convenient –until you start trying to map them to a UI anyway.

Each of the items (enumerators) within the enum has a name, but the name cannot contain white space, special characters, or punctuation. For this reason, they are rarely user-friendly when converted to a string and slapped into your dropdown lists.

Enter the DescriptionAttribute (from the System.ComponentModel namespace). This attribute allows you to tag your enumerators with a nice descriptive text label, which will fit the UI pretty well if only you can dig the value up. Unfortunately, reading attributes is a cumbersome job involving reflection.

Here is the same enum decorated with descriptions:

public enum CarMake
{
	[Description("Ford Motor Company")]
	Ford,     //0

	[Description("Chevrolet")]
	Chevy,    //1

	[Description("Kia")]
	Toaster   //2
}

To bind an enum to a drop down list, many developers tend to just manually hard-code the list with user-friendly text values and corresponding integer values, then map the selected integer to the right enumerator on the back-end. This works fine until someone comes along later and changes the enum, after which your UI is horribly busted.

To get around this mess, I’ve put together a set of extensions that solves this problem for the common enum to drop down list cases.

Note: I’m using the SelectList class, which comes from Asp.net MVC, as an intermediary container. I then bind the SelectList to the appropriate UI control. You can use SelectList in Asp.net webforms, and most other UI frameworks as well, but you’ll need to implement the code for SelectList. The easiest way to do this is to include the source files for SelectList into your own projects.

The code for SelectList can be found on the AspNetWebStack project page over at CodePlex, along with the other source files needed to use it.

The first step in solving the problem is to have an extension method that takes care of reading the description from the enumerators in your enum.

public static string GetDescription(this Enum enumeration)
{
	Type type = enumeration.GetType();
	MemberInfo[] memberInfo = type.GetMember(enumeration.ToString());

	if (memberInfo != null && memberInfo.Length > 0)
	{
		var attributes = memberInfo[0].GetCustomAttributes(typeof(DescriptionAttribute), false);

		if (attributes.Length > 0)
		{
			return ((DescriptionAttribute)attributes.First()).Description;
		}
	}
	return enumeration.ToString();
}

To get an enumerator’s description using this extension method:

string text = CarMake.Ford.GetDescription();

The next challenge is to build a select list for the enum.

public static SelectList ToSelectList(this Enum enumeration, object selectedValue = null, bool includeDefaultItem = true)
{
	var list = (from Enum e in Enum.GetValues(enumeration.GetType())
				select new SelectListQueryItem<object> { 
				   ID = e, 
				   Name = e.GetDescription() }).ToList();

	if (includeDefaultItem)
	{
		list.Insert(0, new SelectListQueryItem<object> { ID = null, Name = "-- select --"});
	}
	return new SelectList(list, "ID", "Name", selectedValue);
}
internal class SelectListQueryItem<T>
{
    public string Name { get; set; }
    public T ID { get; set; }
}

To get the select list using this extension, you just new-up the enum and call the method:

var carSelectList = new CarMake().ToSelectList();

This extension has an optional parameter to set which item in the list should be selected, and another optional parameter that will include a default item in the SelectList (you may want to adjust this extension to match your own convention for default items).

Here is an example that sets a selected value and includes a default item:

var carSelectList = new CarMake().ToSelectList(CarMake.Ford, true);

I’ve been working in C# for over ten years, and until just this week I had no idea that you could instantiate an enum with the new keyword. Of course, I cannot think of a reason why you’d normally want to new-up an enum either, but the capability is handy for this extension.

You can also use this extension by calling it on one of the specific enumerator items too. This example does exactly the same as the previous example:

var carSelectList = CarMake.Ford.ToSelectList(CarMake.Ford, true);

I personally prefer the new-up pattern in this case, since it isn’t intuitive that calling ToSelectList on a specific item would return the list of ALL items from the containing enum.

Now that we have the SelectList, all we have to do is bind it up to a DropDownList.

In Asp.net WebForms 4.5, using model binding, this looks like this:

*.aspx

<asp:DropDownList ID="CarMakeDropDown" runat="server" 
	SelectMethod="GetCarMakeDropDownItems"
	ItemType="MyApp.SomeNameSpace.SelectListItem" 
	DataTextField="Text" 
	DataValueField="Value"
	SelectedValue="<%# BindItem.CarMakeValue %>" 
/>

*.aspx.cs

protected SelectList GetCarMakeDropDownItems()
{   
	return new CarMake().ToSelectList();
}

For more in-depth examples of various techniques for binding SelectList to DropDownList in webforms, see my previous article titled “Asp.net 4.5 webforms – model binding selected value for dropdownlist and listbox“.

In Asp.net MVC model binding to a SelectList is a very common and routine pattern, so I’m not going to provide a detailed example here. Generally though, you make sure your model includes a property or method for getting the enum’s SelectList, then you use the Html.DropDownList or Html.DropDownListFor helper methods. This would look something like this:

@Html.DropDownListFor(model => model.CarMakeValue, Model.CarMakesSelectList)


Asp.net 4.5 webforms – model binding selected value for dropdownlist and listbox

With the Asp.net 4.5 release, webforms inherited several improvements from its Asp.net MVC cousin. I’ve been enjoying the new model binding support lately. Since none of the old datasource controls fully support Entity Framework’s DbContext anyway, model binding is quite handy when using code-first EF models.

For the most part, model binding is straightforward. The basic usage is covered in the tutorials on the asp.net web site, but other resources are rare and hard to find. In the case of DropDownList and similar controls, I found that model binding in webforms was not as straightforward as I would have thought, especially when trying to set the selected value.

Before I begin, let me explain about the SelectList class.

These examples are valid no matter what data you bind to, but the ItemType that I’m showing in these examples use an implementation of the SelectList class, which I’ve borrowed from asp.net MVC. I don’t like to reference MVC assemblies in my webforms applications, so I just copy in source files to my webforms application. Using SelectList gives you a consistent and strongly typed collection, which tracks each item’s display text, value, and its selected state. SelectList acts as a sort of miniature view-model.

The code for SelectList can be found on the AspNetWebStack project page over at CodePlex, along with the other source files needed to use it.

How you bind your dropdownlist depends a lot on whether it appears inside some other model bound control. Binding a dropdownlist inside of a model bound container (repeater, listview, formview, etc.) looks something like this:

*.aspx

<asp:DropDownList ID="MyDropDown" runat="server" 
	SelectMethod="GetMyDropDownItems"
	ItemType="MyApp.SomeNameSpace.SelectListItem" 
	DataTextField="Text" 
	DataValueField="Value"
	SelectedValue="<%# BindItem.EdCodeType %>" 
/>

*.aspx.cs

protected SelectList GetMyDropDownItems()
{
    var items = from t in someDbContext.AllThings
                select new { ID = t.ID, Name = t.Name };
    return new SelectList(items, "ID", "Name");
}

Note: SelectedValue property does NOT show up in intellisense. This appears to be a bug caused by the fact that this property was marked with the BrowsableAttribute set to false (for mysterious reasons). 

When working with a dropdownlist that is not conveniently nested within a model bound container control, binding the dropdown is still fairly simple. You have three options. You can explicitly declare a selected value, if you know what it is at design-time and it never changes. If that isn’t the case, then you can set the SelectedValue property to the result of a method call, or wire up an OnDataBound event handler to set the selected item. Here are the examples:

Declarative example: Set SelectedValue to a known value (rarely helpful):

*.aspx

<asp:DropDownList ID="MyDropDown" runat="server" 
	SelectMethod="GetMyDropDownItems"
	ItemType="MyApp.SomeNameSpace.SelectListItem" 
	DataTextField="Text" 
	DataValueField="Value"
	SelectedValue="1234" 
/>

Declarative example: Set SelectedValue to the result of a method call:

*.aspx

<asp:DropDownList ID="MyDropDown" runat="server" 
	SelectMethod="GetMyDropDownItems"
	ItemType="MyApp.SomeNameSpace.SelectListItem" 
	DataTextField="Text" 
	DataValueField="Value"
	SelectedValue="<%# GetSelectedItemForMyDropDown()%>"
/>

*.aspx.cs

private SelectList myDropDownItems;

protected SelectList GetMyDropDownItems()
{
	//store the selectlist in a private field for use by other events/methods later
	if(myDropDownItems == null)
	{
		var items = from t in someDbContext.AllThings
					select new { ID = t.ID, Name = t.Name };

		var selectedItems = from t in someDbContext.SelectedThings
					select new { ID = t.ID};

		myDropDownItems = new SelectList(items, "ID", "Name", selectedItems);
	}

	return myDropDownItems;
}

protected string GetSelectedItemForMyDropDown()
{
	var selected = GetMyDropDownItems().FirstOrDefault(i => i.Selected);
	return (selected != null) ? selected.Value : string.Empty;
}

Event example: Set Selected item from an event handler

*.aspx

<asp:DropDownList ID="MyDropDown" runat="server" 
	SelectMethod="GetMyDropDownItems"
	ItemType="MyApp.SomeNameSpace.SelectListItem" 
	DataTextField="Text" 
	DataValueField="Value"
	OnDataBound="MyDropDown_DataBound" 
/>

*.aspx.cs

private SelectList myDropDownItems;

protected SelectList GetMyDropDownItems()
{
	//store the selectlist in a private field for use by other events/methods later
	if(myDropDownItems == null)
	{
		var items = from t in someDbContext.AllThings
					select new { ID = t.ID, Name = t.Name };

		var selectedItems = from t in someDbContext.SelectedThings
					select new { ID = t.ID};

		myDropDownItems = new SelectList(items, "ID", "Name", selectedItems);
	}

	return myDropDownItems;
}

protected void MyDropDown_DataBound(object sender, EventArgs e)
{
	var ddl = (DropDownList)sender;
	var selectedValue = GetMyDropDownItems().FirstOrDefault(i => i.Selected);
	if(selectedValue != null)
	{
		ddl.Items.FindByValue(selectedValue.Value).Selected = true;
	}
}

With the ListBox control, and similar controls, you can employ the same techniques as long as you only allow single item selection. If you need to support multiple selection though, you can’t just set SelectedValue. Instead, use the DataBound event to loop over each item and select the appropriate ones.

*.aspx

<asp:ListBox ID="MyListBox" runat="server"
	SelectionMode="Multiple"
	SelectMethod="GetMyListBoxItems"
	ItemType="MyApp.SomeNameSpace.SelectListItem"
	DataTextField="Text" 
	DataValueField="Value" 
	OnDataBound="MyListBox_DataBound"
/>

*.aspx.cs

private SelectList myListBoxItems;

protected SelectList GetMyListBoxItems()
{
	//store the selectlist in a private field for use by other events/methods later
	if(myListBoxItems == null)
	{
		var items = from t in someDbContext.AllThings
					select new { ID = t.ID, Name = t.Name };

		var selectedItems = from t in someDbContext.SelectedThings
					select new { ID = t.ID};

		myListBoxItems = new SelectList(items, "ID", "Name", selectedItems);
	}

	return myListBoxItems;
}

protected void MyListBox_DataBound(object sender, EventArgs e)
{
	var lb = (ListBox)sender;
	foreach (var item in GetMyListBoxItems())
	{
		if (item.Selected)
		{
			lb.Items.FindByValue(item.Value).Selected = true;
		}
	}
}