
Entity Framework: It’s not a stack of pancakes!

Published on April 7, 2014, in Code

I’ve been talking to a lot of line-of-business developers lately. Most have adopted newer technologies like Entity Framework, but many are still working with strictly layered application designs. They’ll have POCO entities, DbContexts, DbSets, and all the modern goodness EF brings. Then they smash it all down into a data access layer, hiding EF behind several layers of abstraction. As a result, these applications can’t leverage EF’s best features, and the entire design becomes very cumbersome.

I blame n-tier architectural thinking!

Pancakes:

Most senior .Net developers cut their teeth on .net during a time when n-tier was a pervasive discussion across the industry. Every book, article and classroom lecture about web application design included a long discussion on n-tier fundamentals. The n-tier hype faded years ago, replaced by more realistic lego-block approaches, but n-tier inspired conceptualizations still exert a strong influence over modern designs.

I call it pancake thinking: conceptualizing the logical arrangement of an application as a series of horizontal layers, each with distinct, non-overlapping areas of responsibility. Pancake thinking pins Entity Framework as a DAL technology, just another means to get data in and out of a database, which is a fundamental misunderstanding of EF’s intended role.

Here is a diagram of a full blown n-tier approach applied to an ASP.NET MVC application using Entity Framework:

EF N-Tier Diagram

Note that all of Entity Framework is buried at the bottom. An object mapper is probably transforming EF’s entities into business objects. So, by the time information leaves the DAL, EF has been beaten completely out of it. EF acts only as a modernized version of traditional ADO.NET objects. EF is a better experience for DAL developers, but it doesn’t add value for those writing code further up the stack.

I don’t see as many strictly layered designs like this in the wild, though I have come across a few recently. Most of the current literature around EF advocates a somewhat hybrid approach, like this:

EF n-Tier Alternate Design

In this design, EF’s POCOs roam around the business and presentation layers like free-range chickens, but DbContext and DbSets are still being tortured in the basement.

What I dislike about this design is that EF has been turned on its head. The DbContext should be at the top of the stack, acting as the entry point through which you interact with entities.

So, how should a modern EF driven application be designed?

EF’s POCO entities, along with DbContext and DbSets, work best with a behavior driven approach to application design. You design around the business-behaviors of your application, largely without concern for what the database looks like.

This is often called a domain-model architecture, and follows the general ideas of “domain driven design” (DDD). Instead of a horizontally segmented architecture, EF becomes a pervasive framework used throughout the entire application at all levels.

If you are worried about mixing “data-logic” with “business-logic”, then you are still thinking about pancakes. Stop it!

Here is another diagram, this time letting Entity Framework do its thing, without any unnecessary abstractions:

EF Domain Model

Notice first, how much simpler the design becomes. We’ve elevated EF to a true business framework, and we’ve eliminated tons of abstractions in the process. We still have n-tier going on, so relax! We just didn’t logically split data and business code like you might be used to.

To understand how this design works, let’s explore the three main concepts behind Entity Framework.

EF Entity Model:

The heart of EF is the entity model. At its simplest, the model is just a bunch of data transfer objects (DTOs) –together they form an in-memory model mirroring the database’s structure. If you generate a model from an existing database, you get this shape. This is also what happens if you design a code-first model using the same thinking you’d use to design a relational database. This “in-memory database” viewpoint is how most n-tier applications tend to use EF models.

That simplistic approach doesn’t leverage EF’s power and flexibility. The model should instead be a true business domain model, or something close to it. Your entities contain whatever code is appropriate for their business function, and they are organized to best fit the business requirements.

The nice thing about a business-oriented model is that code operating against the entities feels very natural to object oriented programmers. You don’t concern yourself with how the actual persistence is done; EF takes care of it for you. This is exactly the level of abstraction n-tier designs strive for, but EF gives you the same result without the rigid, horizontal layering.

Persistence mapping:

The logical entity design should be based on business behavior, but the actual code implementation does require you to understand how EF handles persistence.

Persistence details do place some constraints on the kinds of OOP acrobatics you can employ in your model, so you need to be aware of how it works. Overall though, the constraints are mild, and shouldn’t keep you from an implementation that remains true to the intent of the business-centric design.

Many properties on your entities will need to be persisted to the database. Others may exist only to support runtime behaviors, but aren’t persisted. To figure out how, or if, properties map to the database, EF uses a combination of conventions and attribute annotations. EF doesn’t care about your entity’s other methods, fields, events, delegates, etc. so you are free to implement whatever business code you need.

EF does a good job of automatically inferring much of a model’s mapping from code conventions alone. If you use the conventions appropriately, you can get a head-start on your persistence mappings with no code needed. For properties that need more explicit definitions, you use attributes to tell EF how to interpret them.
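
As a quick illustration, here is a minimal sketch of conventions and attributes working together on a single entity (the Ticket class and its properties are hypothetical):

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class Ticket
{
    // Convention: a property named "Id" becomes the primary key.
    public int Id { get; set; }

    // Attributes refine the mapping where conventions fall short.
    [Required, MaxLength(200)]
    public string Title { get; set; }

    // Runtime-only state; [NotMapped] tells EF not to persist it.
    [NotMapped]
    public bool HasPendingChanges { get; set; }
}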

For really advanced cases, you can hook into the model builder’s fluent API. This powerful tool lets you define tricky mappings that attributes and conventions alone can’t describe fully. If your model is significantly dissimilar from your database’s structure, you may spend a lot of time getting to know the model builder –but it’s easy to use, and amazingly powerful.
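
As a sketch of the kind of mapping the fluent API handles (the table and column names here are hypothetical):

using System.Data.Entity;

public class TicketDeskContext : DbContext
{
    public DbSet<Ticket> Tickets { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map the entity onto a legacy table whose names don't match the
        // model; conventions and attributes alone can't express this cleanly.
        modelBuilder.Entity<Ticket>()
            .ToTable("legacy_help_tickets")
            .Property(t => t.Title)
            .HasColumnName("ticket_subject");
    }
}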

While you will need to understand EF persistence issues, you only need to concern yourself with them when you implement the entity model. For code using that model, these details are highly transparent –as they should be.

Repositories and Domain Aggregates:

The final piece of EF is the part so many people insist on hiding: the DbContext and DbSets. If you are thinking in pancakes, the DbContext seems like a hub for accessing the database. N-tier principles have trained you to hide data access from other code, at all costs!

Typically, n-tier type abstractions take the form of custom repositories layered on top of entity framework’s objects. Only the repositories may instantiate and use a DbContext, while everything at higher layers must go through the repositories.

A service or unit-of-work pattern is usually layered on top of the custom repositories too. The service manages the repositories, while the repositories manage EF’s DbContext and DbSets.

If you’ve ever tried to layer an n-tier application on EF like this, you probably found yourself fighting EF all over the place. This abstraction is the source of your pain.

An EF DbContext is already an implementation of a unit-of-work design pattern. The DbSet is a generic repository pattern. So you’ve just been layering custom unit-of-work and repositories over top of EF’s unit-of-work and repositories. That extra abstraction doesn’t add much value, but it sure adds a lot of complexity.
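
To make that concrete, here is a minimal sketch of EF’s built-in unit-of-work and repository being used directly (the context, entity, and status names are hypothetical):

public void CloseTicket(int ticketId)
{
    using (var context = new TicketDeskContext())
    {
        // The DbSet is already a repository...
        var ticket = context.Tickets.Find(ticketId);
        ticket.Status = TicketStatus.Closed;

        // ...and the DbContext is already a unit-of-work.
        context.SaveChanges();
    }
}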

Ideally, the DbContext should be the root business service –what domain driven design calls an “aggregate root”. The most important thing to understand is that this belongs at the top of the business layer, not buried under it.

Your entities directly contain the internal business logic appropriate to enforce their behaviors. Similarly, a DbSet is where you put business logic that operates against the set of an entity type. Anything that you’d normally put in custom repositories can be added to the real DbSet instead. You do this either through extension methods, or through inheritance.

Extension methods let you extend a DbSet on the fly. They are fantastic for dealing with business-context-specific concerns, and you can have a different set of extension methods for each business context. Calling code can choose which set of extensions is appropriate for its context, and ignore extension behaviors from the other contexts.
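
A minimal sketch of the idea (the entity and status names are hypothetical):

using System.Data.Entity;
using System.Linq;

// Query logic for the help desk context, attached to the real DbSet.
public static class HelpDeskTicketExtensions
{
    public static IQueryable<Ticket> AwaitingResponse(this DbSet<Ticket> tickets)
    {
        return tickets.Where(t => t.Status == TicketStatus.Open);
    }
}

Calling code opts in simply by importing the namespace that holds the extensions for its business context.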

You can also write a custom class that inherits from a specific DbSet<T>. This lets you extend the behavior of the base type, as well as override the DbSet’s standard behavior. Inheritance is more powerful than extension methods, but it doesn’t handle context-specific behaviors quite as elegantly.

For cross-cutting concerns that span multiple entity types, you can extend the DbContext itself. A common approach is to have a concrete DbContext for each business context. Each business-specific class inherits a common DbContext base, which is where the EF-specific stuff lives (factory and adapter patterns often appear here too). The key concept is that each top-level service is derived from a real DbContext.
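
Sketched with hypothetical names, that shape might look something like this:

using System.Data.Entity;

// The common base holds the EF-specific plumbing and the DbSets.
public abstract class TicketDeskContextBase : DbContext
{
    public DbSet<Ticket> Tickets { get; set; }
}

// Each business context derives its top-level service from the real DbContext.
public class HelpDeskContext : TicketDeskContextBase
{
    public void EscalateOverdueTickets()
    {
        // Cross-cutting business logic that spans multiple entity types.
    }
}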

If you embrace EF’s DbContext as your top-level business service, either directly or through inheritance, then you will find that using EF can be a very pleasant experience. You are able to leverage its full power at all layers of your application, and the friction with EF’s internals disappears. It is hard to reach this level of fluidity using custom abstractions of your own.

The Domain Model you’ve read about online:

If you go online and read recent articles about ASP.NET application designs, you’ll find many advocates of domain model designs. This would be great, except that most of them still argue for custom unit-of-work and repository patterns.

The only difference between these designs and the hybrid n-tier layering design I described before is that the abstractions here are a true part of the business layer, and are placed at, or near, the top of the stack.

EF Testable Domain Model

While these designs are superior to pancake models, I find the additional custom abstraction is largely unnecessary, adds little value, and usually creates the same kinds of friction you see in the n-tier approaches.

The extra layer of abstraction seems to have two sources. Partly it comes from legacy n-tier thinking being applied, incorrectly, to a domain model. Even though it avoids full layering, the desire for more abstraction still comes from the designer’s inner pancake.

The bigger force advocating for extra abstractions comes from the Test Driven Development crowd. Early versions of EF were completely hostile to testing. It took insane OOP acrobatics and deep abstractions even to get something vaguely testable.

In EF 4.1, code-first was introduced. It brought us the first versions of DbContext and DbSets, which were fairly friendly towards unit testing. Still though, dependency injection issues usually made an extra layer of abstraction appealing. These middle-versions of EF are where the design I’ve just diagrammed came from.

In current EF versions (6 or higher), DbContext and DbSets are now pervasive throughout all of EF. You can use them with model-first, database-first, and code-first approaches. You can also use them with POCOs, or with diagram generated entities (which are still POCOs in the end). On the testability front, EF has added numerous features to make native EF objects fully, and easily testable without requiring layers of custom abstraction.

You can, through a bit of experimentation, learn how to write great unit tests directly against concrete instances of DbContext and DbSet –without any runtime dependency on a physical database.

How to achieve that level of testability in your model is a topic for another post, but trust me… you don’t need custom repositories for testing EF anymore. All you need is to be smart about how you compose your code, and maybe a little help from a mocking framework here and there.
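
As a taste of what that looks like in EF 6, here is a minimal sketch using the Moq mocking framework and a hypothetical Ticket entity. The mocked DbSet serves its data from an in-memory list, so the LINQ query never touches a database:

using System.Data.Entity;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class TicketQueryTests
{
    [TestMethod]
    public void CanQueryTicketsWithoutADatabase()
    {
        var data = new[]
        {
            new Ticket { Title = "Server down" },
            new Ticket { Title = "Password reset" }
        }.AsQueryable();

        // Wire the mock DbSet's IQueryable plumbing to the in-memory data.
        var mockSet = new Mock<DbSet<Ticket>>();
        mockSet.As<IQueryable<Ticket>>().Setup(m => m.Provider).Returns(data.Provider);
        mockSet.As<IQueryable<Ticket>>().Setup(m => m.Expression).Returns(data.Expression);
        mockSet.As<IQueryable<Ticket>>().Setup(m => m.ElementType).Returns(data.ElementType);
        mockSet.As<IQueryable<Ticket>>().Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());

        // LINQ runs against the in-memory data through the mocked set.
        Assert.AreEqual(2, mockSet.Object.Count());
    }
}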

I’ve kept most of this discussion pretty high-level. Hopefully it will help expand how you view EF based application designs. With some additional research, you should be able to take these ideas and turn them into real code that’s relevant to your own business domain.

 

 

Evolving Custom Applications into Resalable Products (expanded edition)

Published on March 16, 2014, in Code

You’ve been commissioned to build a custom application for a particular client. In addition, you or the client wants the option of putting the application up for resale when it’s done. 

How should you approach this kind of software project?

Custom software takes a lot of time, and is very expensive for the client. They will sink a lot of money into design and development long before they even see the product, so it isn’t unusual to be asked to design for resale. This is a smart move. Reselling a custom application can help recoup the massive up-front costs, and it might even open up a whole new market.

If you get asked to write software like this, do yourself a favor… expect that the initial client is the only client you will ever have. Create an amazing product for the current paying client, and ignore any potential mass-market opportunities for now. If you are successful with your first client, only then should you start seriously worrying about other clients.

Once you have an initial product, you will almost certainly have to redesign and refactor in order to make it ready for the mass-market, so why not just design for the mass-market from the beginning?

Simple… risk and cost.

A mass-market application needs a robust and extensible architecture, more user and developer documentation, a full suite of built-in admin and configuration tools, installers, and support for multiple runtime and database platforms. It may also need localization, theming, and customization features far beyond what your initial client needs. All of this extra stuff takes time and effort, but has little value to the initial client. If you waste their time and money building to support other customers, then you will under-serve the customer you have now. Worse, you don’t even know if the application will be successful, or if it has the mass-appeal you imagine it does.

Getting to version 1.0:

Use an agile process. You don’t have to use a formal methodology, but stick to the agile basics. Code the bare minimum you can get away with, deploy it to your paying customer for testing, get their feedback, then go back and code some more. Repeat until you have all of the 1.0 requirements implemented.

If you are doing this under contract, I suggest that you price your deliverable in iterations too. Embrace your customer’s tendency to introduce feature creep, and shift the requirements as you go. Be eager to change the design, just make sure you have a good understanding (and a contractual agreement) around change management –make sure you are getting paid for scope increases, no matter how small they are at first. Once you start doing extras for free, it will be difficult to charge for bigger change requests later.

Brutally cut everything you can from the client’s initial requirements list. Keep it simple, and implement only the bare essentials. Your client will insist that some fluffy features are “essential”. Ignore them and cut them from your implementation plan. You don’t have to tell the client what you’ve cut, just tell them you have their pet features scheduled for “a later iteration” –even if you don’t. As you deploy iterations towards a 1.0 release, your client will forget about many of those fancy features. Over time you will also get a better feel for what is truly important to the application’s success. Don’t waste time writing a bunch of low-value glitz that isn’t necessary –save that stuff for later versions.

Make absolutely sure that every user-facing feature you implement is amazing. It should be pretty, function smoothly, perform well, and be feature complete. It is better to tell the client that you haven’t implemented the user manager than to show them an ugly or incomplete user manager. Everything they see on screen, ever, should look and act like a finished product. Never demo anything that might disappoint.

Avoid over-architecting. It is tempting to layer and componentize everything, follow all the best practices, use all the recommended design patterns, and leverage all those fancy OOP techniques you’ve been reading about.

Don’t do it!

Deliver the necessary features to get the job done, and do it as fast as possible. Where an advanced technique or best practice would reduce your own efforts, go for it! But, if you can’t say exactly how a chosen architecture moves the application towards the finish-line, scale it back.

High-coverage unit testing may be super trendy right now, but carefully consider how much testing you should commit to. Unit tests do not meet any of your application’s actual business requirements. Unit tests, collectively, are a complete application in their own right. They have to be designed, coded, maintained, and documented just like code for any other application. If you do heavy testing, you are effectively building a second application alongside the first, but are only being paid for one of them. That doesn’t mean you shouldn’t do unit testing, even high-coverage testing. But you must be as aware of the costs of testing as you are the benefits –especially when considering a test-first methodology like test driven development (TDD) or behavior driven development (BDD).

If you are working in a dynamic language, high-coverage testing is almost always a good idea. These languages tend to have weak design and compile time validation, and their code analysis tools are often quite limited. High-coverage tests compensate for the lack of compile time checking and weak code analysis tools. Also, in dynamic platforms, unit tests are usually easy to write, and have very low maintenance overhead. You won’t have to spend much effort on abstract architectural acrobatics, and you probably won’t need a lot of test-specific infrastructure. So test away!

For static language environments, the cost-benefit calculation for high-coverage testing is much more complex. Some testing is a given in any project, but should you go for high-coverage testing, or a full-blown test-oriented development methodology like TDD?

Unit tests have more value with larger projects and larger teams. They also have more value if your team has a mix of developers at different experience levels. For experienced developers and smaller projects, you often get better results just using good code-analysis tools in conjunction with compiler validation. In these cases, you can reserve unit tests for the code routines that really need them: complex, tricky, and critical code segments.

Also consider that high-coverage testing is much harder in static language environments. Often testing necessitates horribly complex architectural designs that wouldn’t be necessary otherwise. For non-trivial applications, you’ll also end up with a lot of test-specific infrastructure, mocking frameworks, and the need to develop deep expertise with the testing tools and technologies. So, commit to high-coverage testing only if you are really sure that the benefits to the development team are really worth the costs.

While you should move 1.0 forward without much consideration for the mass-market, one concession you should make is to avoid 3rd party code libraries where you can. This includes 3rd party frameworks as well as packaged UI component suites. If you cannot produce an essential feature with built-in components, or can’t easily write your own, then a 3rd party component may be necessary. Try to stick to open source components if you can. Even though you aren’t worried about the retail market yet, you don’t want to trap yourself into dependencies on 3rd party technologies that you can’t legally redistribute, or that increase the price you will have to charge for your retail product.

Don’t worry about making everything configurable via online admin tools in version 1.0. Sure, it is nice to give your initial client built-in admin tools, but it adds complexity and time. Admin tools are not a frequently used feature-set, so they offer a lower return on investment. To the extent you can, stick to config files or manual database entry. Fancy GUI tools can wait until the core application has proven itself successful enough to justify the additional investment in management tools.

Stick to the simplest data access mechanism that meets your needs, and don’t worry about supporting multiple database platforms. Pick a database platform that the paying client can live with, and don’t look back.

In version 1.0, the most important feature set is likely going to be reporting. If your application has any reporting needs, and most do, you need to be sure you start with reports that dazzle your client from day one. Be sure the reports are pretty, highly interactive (sorting, filtering, paging, etc.), and are highly printable. Reporting is the part of your application that your client’s management will use the most, and they hold the purse strings. Knock them dead with killer reports!

Later, if you get to the retail market, reporting will also be the feature that makes or breaks most new sales.

After version 1.0: getting to the retail market:

I can’t tell you how to find and convince other people to buy your amazing application. What I can tell you is this: if 1.0 makes your initial customer happy, then you have an application that will probably appeal to other clients too.

You can probably make a few minor adjustments to the 1.0 product, and deliver it to other clients as-is. Get it into the hands of as many clients as you can, with as few changes as you can, as soon as you can.

Wait? Didn’t I say earlier that mass-market products required tons of extra stuff? Admin tools, documentation, installers, and all that?

Well, I lied. You will want that stuff for version 2.0 of course, but you can usually take a successful 1.0 product to market, even if it is still rough. Sure, it lacks customization, has crappy documentation, and poor admin tools… but if it works well enough for one client, it probably works well enough for at least a few others. So don’t wait on version 2.0, go ahead and try to get 1.0 out there now! 

Why the rush?

You want to recoup the sunk costs of the initial development as soon as possible.

You need as many real clients as you can get so you can gauge if there are additional requirements that need to be addressed in the next version.

You can learn from potential clients that decline to buy your application. Ask them, politely, why they passed on your product. Is it too expensive? Do they have a better competing product already? Does your app lack essential features?

So… what about version 2.0?

If 1.0 was successful with your paying customer, you will likely be heading towards a 2.0 product even if you don’t have a retail market –your initial client will still want improvements. If you can’t reach a larger market soon after 1.0 is done, consider abandoning the larger market. This will keep the costs low for your one paying customer, and will reduce the scope of 2.0 considerably. Concentrate on just enhancing the application around your one customer’s more ambitious needs, and shore up the weaknesses in your initial design.

If there is a retail market buying your product, then keep in mind that developers may be part of your customer base. Developers may need to extend or interoperate with your application, so make sure 2.0 has good developer documentation, consistent APIs, and web service APIs.

Version 2.0 should mostly be about fixing up any sloppy code in 1.0 and improving the design to support those features you cut from the client’s 1.0 wish-list. Re-design, re-architect, and re-code the core framework to support future enhancements going forward. You probably aren’t quite ready for a lot of new user-facing features yet, so save most of those for versions after 2.0.

Version 2.0 is where you start thinking about extensibility, customization, internationalization, and advanced administration. This is also the time to give scalability more serious thought. What about really large customers with lots of data? Are you going to support multi-server or cloud deployments? Should you aim to provide the product as a software service, instead of a packaged product?

This is also the time to consider if the application needs interoperability with other products… perhaps you need to support data imports for customers migrating from a competing product, or using your product in conjunction with a larger suite of products.

This is also the time to reconsider your development team itself. Do you need to contract outside resources, or hire more people? Do you need visual and graphic designers? Do you need code optimization or big-data specialists? What about hiring a marketing firm? How about payment, billing, and sales?

One thing you should consider not doing, in any version, is supporting multiple database or server platforms. Pick the platform that makes the most sense given the development platform. The odds are very good that most clients can deal with your choice, even if they prefer another database or server platform. Is the cost of developing for multiple platforms going to generate enough new sales to pay for itself? Probably not. You’d be better off offering your product as a cloud service, than trying to deal with multi-platform applications. The exception is mobile applications, where you should certainly support multiple platforms right from the start.

And that’s really all I have to offer… once you get to designing version 2.0, you will know better than I do what you should focus on.

 

Die Twitter!

Starting last summer, my Twitter account started getting hijacked to send out marketing spam. No biggie, I thought, I’ll just change the password. A few days later, hacked again… then again… then again…

I unhooked Twitter integration from all my other sites and services, pulled all the app permissions, and even wrote a quick app to generate a random 14 character super-strong password (mixed case, special characters, numbers, etc.)

Then… yup! Hacked again.

Twitter would catch on to the hack each time, after my account started spamming random adverts all over the place. They’d disable the account, and email me to change the password.

I don’t use Twitter much. I didn’t have their native apps installed on my phone, nor my computer. I don’t even like Twitter very much. So eventually I just decided to leave the account disabled, figuring that eventually Twitter would tighten up their security.

Then a couple of days ago, I got an email telling me the account had been hacked again. I had never re-enabled it after that last hack, but sure enough, when I went to the site I could log in using my randomly generated password, no problem.

So that’s it… I’ve killed my twitter account.

I have more than 140 characters of ideas about what Twitter can do with themselves.

 

 

TicketDesk 3 Dev Diary – Update: One AspNet and Aspnet.Identity

Since most of what I’m working on now is being done in my private repository, I wanted to give everyone a quick update on TicketDesk 3’s progress.

I’m still working on TD3, but late in the fall it became apparent that I needed to wait on the RTM version of Visual Studio 2013 and the new Asp.net Web API 2 and Identity bits. Now that this stuff has all been released, and most of the dependencies have caught up, I have resumed work on TD3.

The main challenge is incorporating the new Aspnet.Identity framework. I am thrilled that Microsoft finally replaced the old security providers, but the transition to Aspnet.Identity is not all roses. The new framework is not as well documented as I’d like, and guidance on advanced uses is still thin. It is also a fairly complex framework that requires a good bit of manual coding, especially when used in conjunction with a SPA front-end. Fortunately for me, Yago Pérez Vázquez has created a template project called DurandalAuth that does exactly what I’ve been trying to do with TicketDesk… combine the bleeding edge versions of Aspnet.Identity and Web API 2 with the bleeding edge versions of Durandal, Breeze, and Bootstrap.

In fact, his template is so good, that I’m pretty much building TicketDesk 3 on top of his template instead of just porting the authentication stuff into my existing TD3 project… and this is why the project hasn’t been pushed into the public repository just yet… it’s a new project that doesn’t yet have all the features from the old one.

There are a few things about the DurandalAuth template that I’m not so sure about: the use of StructureMap instead of Ninject for IoC, and the fact that he’s layered the back end with a custom repository and unit of work pattern. But overall, the design is generally the same as what I had been designing for TD3, except that he’d also implemented the new asp.net identity bits. The template also includes some SEO stuff that isn’t relevant to TicketDesk 3, though I may leave it there in case people want to make use of it.

At present, I’m in the process of combining the new project with the code I’ve already written for TD3, and adapting the design to TD3’s particular needs (internationalization, for example). This will take a few more weeks, but once I’m done I will be able to push the new project to the public GitHub repository for everyone else to look at.

The technology stack for TD3 is now complete, and includes the following:

  • Asp.net Web API 2
  • Durandal 2 & Breeze
  • i18Next
  • Aspnet.Identity
  • Bootstrap 3
  • SignalR 2

The only major design element that I’ve yet to work out completely is using ACS and/or ADFS security. The identity implementation for Web API 2 uses a different mechanism (organizational accounts) for integrating with ACS and ADFS, so I’ll have to find a way to smash the two options together, and provide enough internal abstraction that either configuration is possible with minimal additional code.

 

I called it – Ballmer leaves Microsoft

Just over a year ago, I wrote a review of Windows 8, based on the release preview version that shipped last summer. At the end of that piece, I predicted that Steve Ballmer would be forced out as CEO in 2013. It turns out I was right. Ballmer has announced his pending resignation.

This is a perfect time for Ballmer to leave. It’s been long enough since Surface RT and Windows 8’s rocky releases for Ballmer to take most of the negativity around the company’s market missteps into retirement with him. The next big release cycle is still reasonably far off. If it’s a good release, the new CEO will be able to take all the credit for it, and if it isn’t so good then it can still be blamed on Ballmer.

Poor guy. I’ve met Mr. Ballmer, though I’m not familiar enough with him to comfortably call him Steve. Factually speaking, Microsoft did very well under his leadership. It grew market share, expanded into new markets, and maintained a healthy financial bottom line. As a person, the most striking thing about Ballmer is that he is a true believer. He believes in Microsoft as a company, he believes in its products, and he believes in its mission. But what impresses me the most is that he has always believed that Microsoft could be better and do more. He always moved forward with genuine optimism and enthusiasm.

It’s tragic that his last act as CEO will be to take the blame for all the company’s faults. That, my friends, is the kind of self-sacrifice worthy of respect. It will give Microsoft a chance to turn its image around, and to regain its footing in its troubled consumer and mobile segments.

I just hope that whoever takes his place understands what Ballmer’s exit means for the company, and can capitalize on the opportunity. Microsoft still has all the financial, legal, and technical tools that it needs to right itself… all it needs is someone with vision enough to get it done.

BTW Microsoft, I’m open to new opportunities. If you have trouble locating a suitable replacement CEO, just give me a call. I can fix it.

 

TicketDesk 3 Dev Diary – MEF, IoC, and Architectural Design

TicketDesk 2 and TicketDesk 3 have some key architectural differences. Both enforce a strict separation of concerns between the business and presentation layers, but there are major architectural differences within each layer. In this installment, I’d like to talk about how the back-end architecture will evolve and change.

TicketDesk 2 – Decoupled design:

The most significant technology that shaped TicketDesk 2’s class library design was the use of the Managed Extensibility Framework (MEF). The use of MEF in TicketDesk 2 was not about modularity, at least not in a way that is relevant to business requirements. TicketDesk 2 was never intended to support plug-ins or dynamic external module loading. I used MEF for two reasons; I was giving test driven development (TDD) another shot, and I had planned to write a Silverlight client for TicketDesk 2.

MEF was originally built by the Silverlight team. It had a lot of potential for other environments, but didn’t play well with MVC back then. It took some dark magic and hacking to just make it work there. MEF is an extensibility framework first, but an IoC container only by accident. While MEF can do the job of an IoC container, it wasn’t particularly good in that role.

As an extensibility framework, MEF actually has more in common with require.js than with traditional server-side IoC frameworks. As a Silverlight technology, its primary purpose was to enable clients to download executable modules from the server on demand. This is exactly what require.js does for JavaScript in HTML applications. The truly interesting thing is that TicketDesk 2 did not use MEF in this way at all. Asp.Net MVC is a server-side environment following a request-response-done style of execution flow. Deferred module loading isn’t relevant in that kind of environment. TicketDesk used MEF only for its secondary IoC features: runtime composition and dependency injection.

Considering the difficulty in getting MEF working, and the fact that there are better IoC frameworks for MVC, I should have scrapped MEF in favor of Ninject –which has made me very happy in dozens of other projects. I stuck with MEF partly because it would pay off when I got to the Silverlight client, and partly because I liked the challenge that MEF presented.

Sadly, I was only three weeks into development on TicketDesk Silver, the Silverlight client, when Microsoft released Silverlight’s obituary. I had two other projects under development with Silverlight at the time, so that was a very bad summer for me.

The modular design of TicketDesk’s business layer is mostly about testability. EF 4 was quite hostile to unit testing, so I did what everyone else was doing… I wrapped the business logic in unit-of-work and repository patterns, and made sure the dependencies targeted abstract classes and interfaces. If you want to get all gang-of-four about it, the service classes in TD2 are more transaction script than unit-of-work, but it gets the same job done either way. This gave me the level of testability I needed to follow a (mostly) TDD workflow.

One thing I have never liked about heavy unit testing, and TDD in particular, is having to implement complex architectures purely for the sake of making the code testable. I’ll make some design concessions for testability, but I have a very low tolerance for design acrobatics that have nothing to do with an application’s real business requirements.

TicketDesk 2 walks all over that line. I dislike that there are a dozen or more interfaces that would only ever have one (real) concrete implementation. Why have an interface that only gets implemented by one thing? I also dislike having attributes scattered all over the code just to describe things to an IoC container. Neither of those things makes TicketDesk work better. They just make it more complex, harder to understand, and harder to maintain.

On the flip-side, I was able to achieve decent testability without going too far towards an extreme architecture. The unit tests did add value, especially early in the development process – they caught a few bugs, helped validate the design, and gave me some extra confidence.

If you noticed that the current source lacks unit tests, bonus points to you! My TDD experiment was never added to the public repository. I was pretty new to TDD, and my tests were amateurish (to be polite). They worked pretty well, and let me experience real TDD, but I didn’t feel that the tests themselves made a good public example of TDD in action.

TicketDesk 3 – Modularity where it matters:

A lot has changed for the better since I worked on TicketDesk 2.

Some developers still write their biz code in a custom unit-of-work and repository layer that abstracts away all the entity framework stuff; which is fine. But when EF code-first introduced the DbContext, it became much friendlier towards unit testing. The DbContext itself follows a unit-of work pattern, while its DbSets are a generic repository pattern. You don’t necessarily need to wrap an additional layer of custom repository and unit-of-work on top of EF just to do unit testing anymore.

I plan to move most of the business logic directly into the code-first (POCO) model classes. Extension methods allow me to add functionality to any DbSet<T> without having to write a custom implementation of the IDbSet interface for each one. And the unit-of-work nature of the DbContext allows me to put cross cutting business logic in the context itself. Basically, TD 3 will use something close to a true domain model pattern.

As for dependency injection, the need to target only interfaces and abstract types has been reduced. An instance of a real DbContext type can be stubbed, shimmed, or otherwise mocked most of the time. In theory, I should be able to target stubbed/shimmed instances of my concrete types. If I find the need to target abstracts, I can still refactor the DbSets and/or DbContext to inherit custom interfaces. There still isn’t a compelling need to wrap the business logic in higher layers of abstraction.

In TicketDesk 3, I will not be using a TDD workflow. I love unit testing, but am traditionally very selective about what code I choose to test. I write tests for code that will significantly benefit from them –complex and tricky code. I don’t try to test everything. Using TDD as a design tool is a neat thought process, but I find that design-by-test runs counter to my personal style of design. I can easily see how TDD helps people improve their designs, but I personally tend to achieve better designs when I’m coding first and testing later.

When I do get to the need for dependency injection, I plan to run an experimental branch in TicketDesk 3 to explore MEF 2 a bit further. I think they have fixed the major issues that made MEF 1 hard to use in web environments, but it is almost impossible to find good information online about MEF 2. The documentation, when you can find it, is outdated, contradictory, and just plain confusing. What I have found suggests that MEF 2 does work with MVC 4, but still requires some custom infrastructure. What I don’t know is how well it works.

With the need for dependency injection reduced, few compelling extensibility requirements on the back-end, and no plans to do heavy unit testing, I am more inclined to go with Ninject. They care enough to write top-notch documentation, and it was designed explicitly for the purpose of acting as an IoC container… which is the feature set TicketDesk actually needs.

 

TicketDesk 3 Dev Diary – Localization and Internationalization

One of the most frequently requested features for TicketDesk 2 was support for localization. TicketDesk is a stringy application, with lots of system-generated text that will end up in the user’s face at some point. Localizing TD2 required combing through the source code, line by line, translating magic strings by hand.

Clearly, this is not an acceptable approach with TicketDesk 3.

Since localization is thorny, and a weak spot in my own skill-set, I consider it essential to design for localization as early in the process as possible… and now that the code has gotten to the point where it draws a useful UI, it is time to get started.

In the typical Asp.Net application, localization is mostly just a matter of creating resource files that contain the text translations, then making sure the code only gets text from those resources. There is a lot of support in .Net around localization, cultures, and resource files, so this is pretty easy to do. The only difficult part, for the mono-lingual developer, is getting the text translated into those other languages in the first place.

TicketDesk 3 is a SPA application, which presents a different problem. The UI is mostly HTML and JavaScript, so all that nice .Net localization stuff is unavailable when generating the screens that users will actually see. So the first step was to find a JavaScript library for localization; something that does the same job as .Net resource files. The second challenge was connecting that JavaScript library to the server-side .Net resource files.

Fortunately, there is a fantastic JavaScript library called i18next that fits the bill.

Translations in TicketDesk 3:

i18next follows a pattern similar to server-side .Net resource files. You supply it with json files that contain translated text. Once i18next has the text, it takes care of binding it to the UI via HTML data-* attributes, or directly through JavaScript functions. As a bonus, i18next is easy to use in conjunction with knockout’s own UI binding.

TicketDesk performs text functions on the server too, so it still needs .Net resource files. Rather than maintaining separate translation files for the server and client, I wanted to be able to pipe the contents of the resource files to i18next directly. For this, I leveraged Asp.Net Web Api. Configuring i18next to get its json translations from Web Api is simple: just map the URLs it uses to Web Api endpoints.

The Web Api controller itself was a bit more complex. It has to detect the specific language that i18next is requesting, then build an appropriate response in a format i18next can consume. The controller loads a ResourceSet for the requested culture, then loops through the properties to build a dynamic key/value object with all the translated text. Once that’s done, it outputs the dynamic collection as a json response.
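
A minimal sketch of such a controller follows; the controller name, route, and the Strings resource class are assumptions for illustration:

using System.Collections;
using System.Globalization;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class TranslationsController : ApiController
{
    // i18next requests something like: GET api/translations?lng=es-MX
    public HttpResponseMessage Get(string lng)
    {
        var culture = new CultureInfo(lng);

        // Strings is a hypothetical .Net resource class; tryParents lets
        // es-MX fall back to es, and ultimately to the default resources.
        var resourceSet = Strings.ResourceManager.GetResourceSet(culture, true, true);

        // Flatten the ResourceSet into the flat name/value lookup i18next expects.
        var translations = resourceSet.Cast<DictionaryEntry>()
            .ToDictionary(e => (string)e.Key, e => (string)e.Value);

        return Request.CreateResponse(HttpStatusCode.OK, translations);
    }
}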

i18next has a richer set of capabilities than straight .Net resource files. Resource files are just simple name/value lookups. With i18next, the translation files can have nested properties, replacement variables, and there are features for interpolation (plural forms, gender forms, etc.). These features are available in .Net with some advanced language frameworks, but the basic resource files don’t go that far. Fortunately, TicketDesk only needs the basic set of features, so a flat name/value lookup should be sufficient to get the job done; though it doesn’t leverage some of i18next’s more advanced features.

Localization is more than text translations. TicketDesk also deals with numbers occasionally, and there are some dates too. Fortunately, it isn’t a number-heavy application, nor are there user-editable dates or numbers. The moment.js library easily handles local date display formatting, and numeral.js can handle the couple of numbers.

The main weak point in TicketDesk 3’s localization will be the absence of structural language transformations. Once you get into right-to-left languages and other exotic forms, the physical layout of the entire UI has to change. Sadly, I do not have the expertise to correctly design for such languages. HTML 5 and CSS 3 do have decent support for this kind of cultural formatting though, so my hope is that anyone needing to localize for these languages can do so themselves without difficulty.

Internationalization:

My intention for TicketDesk 3 was simple localization; the admin would tell the server what language to use, and the system would just generate the UI in that language for all users. I did not initially expect to support dynamic internationalization — the ability to support multiple languages based on individual user preference.

When I got into the details of the i18next implementation though, it quickly became apparent that TicketDesk 3 could easily achieve real internationalization… in fact, internationalization would be about as easy as static localization.

The result is that TicketDesk 3 will be internationalized, not just localized. It will detect the user’s language and dynamically serve up a translated UI for them, as long as resource files exist for their language. If translations for their specific language and culture aren’t available, it will fall back to the best-matching language, or to the default if no better match exists.

State of the Code:

I have the plumbing for internationalization in place in the alpha already. It auto-detects the browser’s language, or you can override it via the query string (e.g. ?setLng=es-MX). Since I don’t speak any other languages, I can’t provide any real translations myself. For the short term, I have created a generic Spanish resource file, into which I copied the English text surrounded by question marks. This isn’t real localization, but it serves to validate that the localization support works correctly.

For dates, I’m using moment.js, so it should adapt to the browser’s language settings automatically, but I haven’t set moment up to use the querystring override yet… I’ll get to that soon though. I’m not doing any number formatting yet, but when I do, I’ll implement numeral.js or a similar library.

When TicketDesk 3 gets into beta, and I have the full list of English text strings in resource files, then I will get a native Spanish speaker to help generate a real set of translations. Hopefully, the community will pitch-in to provide other languages too.

If you want to take a look at the early alpha version, I have published a TicketDesk 3 demo on Azure. I can’t promise that it will be stable, and it certainly isn’t a complete end-to-end implementation. But feel free to play around with it. To play with localization, either change your browser’s language to Spanish (es, es-MX, or similar), or use the querystring override: ?setLng=es

 

TicketDesk 3 Dev Diary – SignalR

One of the overall design goals of TicketDesk since version 1 has been to facilitate near-frictionless, bi-directional communication between help desk staff and end-users. Tickets should evolve as a natural conversation, and the entire history of those conversations should be right there in the ticket’s activity log. TicketDesk has done a reasonably good job in this area, but SignalR presents an opportunity to take this idea to a whole different level.

The goal behind the SignalR library is to give the server a live, real-time channel to code running on the client. The server can push notifications whenever things change, and that information is available to the user immediately. The techniques SignalR uses to achieve this are not entirely new, but they have historically been difficult to implement.

TicketDesk 3 uses Breeze on the client, and Breeze tracks all the entities it has retrieved from the server. Knockout is used to bind those entities to the UI for display. The beauty of this combination is that Knockout automatically updates the UI anytime the entities in Breeze change.

With SignalR, the browser can listen in the background for updates from the TicketDesk server. When the server notifies the client that a ticket has changed, the client can then choose to fetch the new data in the background, and update the local copy being tracked by Breeze… and Knockout will automatically refresh the display to show that updated data to the user.

The best thing about SignalR is that it is trivially easy to setup, and with the combination of Breeze and Knockout it is super simple for the UI to respond intelligently.

As a proof of concept, I have coded up a simple SignalR hub that will tell all connected clients when a ticket changes (and what the ID of the changed ticket is). The client will check to see if it is tracking a ticket with that ID, and if so it will automatically fetch a new copy of the ticket from the server. Anything on the screen that is bound to that ticket will automatically update to show the changes. This was not only very easy to implement, but it seems to work very well.
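
A minimal sketch of that proof-of-concept hub and its server-side trigger (the names here are hypothetical):

using Microsoft.AspNet.SignalR;

public class TicketHub : Hub { }

public static class TicketNotifier
{
    // Call this from wherever tickets are saved on the server.
    public static void NotifyTicketChanged(int ticketId)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<TicketHub>();

        // Broadcast only the id; each client decides whether it is tracking
        // that ticket, and re-fetches it in the background if so.
        hub.Clients.All.ticketChanged(ticketId);
    }
}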

I then took it a step further, and coded up several special routines for the ticket list view. Not only does it update the tickets displayed on screen, but it also responds intelligently to changes in the number of open tickets, or changes of the order of the tickets within the list.

This list view, as currently implemented in the alpha, is a paged list showing 5 items on screen at a time. Because the list is sorted by the last-updated date, anytime a ticket is updated, the order of items in the list changes too. If a ticket is closed or re-opened, the number of items will grow or shrink. Pager buttons may need to be disabled or enabled, items on the current page may change as tickets are shuffled around, and the current page might not even be valid anymore if the number of open tickets shrinks enough.

With very little effort, I was able to code up a list view that dynamically responds to real-time changes on the server, and keeps itself current without the user ever needing to explicitly refresh the screen.

I plan to use this set of capabilities around SignalR to make the entire application behave in near real-time. The ticket activity log will behave like a real-time chat conversation, lists will automatically adjust as things change, and notifications will appear to keep the user informed.

If you want to take a look at the early alpha version, I have published a TicketDesk 3 demo on Azure. I can’t promise that it will be stable, and it certainly isn’t a complete end-to-end implementation. But feel free to play around with it.

To see the SignalR behavior in action, just open the site in two browsers at the same time. Make changes to a ticket in one, and watch the other browser update itself.

 

TicketDesk 3 Dev Diary – Hot Towel

For TicketDesk 3, what I most hope to achieve is an improvement in the overall user experience. Since I wrote TicketDesk 2, much has happened in the JavaScript world. New JavaScript frameworks have matured, and they enable deeper user experiences with much less development effort than ever before. TicketDesk is a perfect candidate for Single Page Application (SPA) frameworks, so all I had to do was pick a technology stack and learn to use it.

I have decided to start from the wonderful Hot Towel SPA by John Papa. Hot Towel is a visual studio project template that combines packages from several different client and server frameworks. It includes Knockout.js for UI data-binding, Durandal for routing and view management, and Breeze for talking to the ASP.NET Web Api backend.

My main reasons for choosing Hot Towel are:

  • It is a complete end-to-end SPA Template.
  • It is well documented.
  • The components it relies on are reasonably mature, and well maintained.
  • There are good sample applications built on Hot Towel.
  • John Papa published an excellent, and highly detailed, video course for Hot Towel at Pluralsight.
  • It is very easy to learn and use.

One of the disappointments when switching from server-side Asp.Net to a SPA framework is that the UI is pure JavaScript, HTML, and CSS. It makes almost no use of MVC controllers or views, which always makes me feel like I’m wasting valuable server capabilities. A SPA does make heavy use of Asp.Net Web Api for transferring data, but the UI leaves all that wonderful Asp.Net and Razor view engine stuff behind.

Once I learned my way around Hot Towel, I was surprised to find that working with Knockout, Durandal, and Breeze on the client is much easier than working with Asp.Net on the server. I’m no fan of JavaScript as a language, but the current crop of JavaScript frameworks is truly amazing.

Now that I’ve learned my way around Hot Towel’s various components, I’ve been able to develop a fairly advanced set of UI features very quickly. The current UI is very raw and only provides a primitive set of features, but it has already exceeded my initial expectations by several orders of magnitude.

If you want to take a look at the early alpha version, I have published a TicketDesk 3 demo on Azure. I can’t promise that it will be stable, and it certainly isn’t a complete end-to-end implementation. But feel free to play around with it.

 

Asp.net 4.5 mvc or webforms – model binding dropdownlist to an enum with description attributes

Published on May 29, 2013, in Code

One of the more common problems people encounter when working with any .Net user application is the need to put a UI on top of some enumeration. Normally you need to present a friendly list of all the possible items in the enumerator list, then allow the user to pick one of them. This UI typically takes the form of a drop down list, combo box, list box, or similar.

Enums are wonderful in C#, but unlike some other languages, they are also a very thin type. Enums define a collection of named constants. By default, each enumerator in the list equates to an underlying integer value.

Here is an example Enum, and for clarity I’ve specified the value for each enumerator:

public enum CarMake
{
    Ford,     //0
    Chevy,    //1
    Toaster   //2
}

Enums are lightweight, highly efficient, and often very convenient –until you start trying to map them to a UI anyway.

Each of the items (enumerators) within an enum has a name, but the name cannot contain white space, special characters, or punctuation. For this reason, the names are rarely user-friendly when converted to a string and slapped into your dropdown lists.

Enter the DescriptionAttribute (from the System.ComponentModel namespace). This attribute allows you to tag your enumerators with a nice descriptive text label, which will fit the UI pretty well if only you can dig the value up. Unfortunately, reading attributes is a cumbersome job involving reflection.

Here is the same enum decorated with descriptions:

public enum CarMake
{
	[Description("Ford Motor Company")]
	Ford,     //0

	[Description("Chevrolet")]
	Chevy,    //1

	[Description("Kia")]
	Toaster   //2
}

To bind up an enum to a drop down list, many developers tend to just manually hard-code the list with user-friendly text values and corresponding integer values, then map the selected integer to right enumerator on the back-end. This works fine until someone comes along later and changes the enum, after which your UI is horribly busted.

To get around this mess, I’ve put together a set of extensions that solves this problem for the common enum to drop down list cases.

Note: I’m using the SelectList class, which comes from Asp.net MVC, as an intermediary container. I then bind the SelectList to the appropriate UI control. You can use SelectList in Asp.net webforms, and most other UI frameworks as well, but you’ll need to implement the code for SelectList. The easiest way to do this is to include the source files for SelectList in your own projects.

The code for SelectList can be found on the AspNetWebStack project page over at CodePlex. The three files needed are SelectList, MultiSelectList (the base class of SelectList), and SelectListItem.

The first step in solving the problem is to have an extension method that takes care of reading the description from the enumerators in your enum.

// requires: System.ComponentModel, System.Linq, and System.Reflection
public static string GetDescription(this Enum enumeration)
{
	Type type = enumeration.GetType();
	MemberInfo[] memberInfo = type.GetMember(enumeration.ToString());

	if (memberInfo.Length > 0)
	{
		var attributes = memberInfo[0].GetCustomAttributes(typeof(DescriptionAttribute), false);

		if (attributes.Length > 0)
		{
			return ((DescriptionAttribute)attributes.First()).Description;
		}
	}
	// no DescriptionAttribute found; fall back to the enumerator's name
	return enumeration.ToString();
}

To get an enumerator’s description using this extension method:

string text = CarMake.Ford.GetDescription();

The next challenge is to build a select list for the enum.

public static SelectList ToSelectList(this Enum enumeration, object selectedValue = null, bool includeDefaultItem = true)
{
	var list = (from Enum e in Enum.GetValues(enumeration.GetType())
				select new SelectListQueryItem<object>
				{
					ID = Enum.Parse(enumeration.GetType(), Enum.GetName(enumeration.GetType(), e)),
					Name = e.GetDescription()
				}).ToList();

	if (includeDefaultItem)
	{
		list.Insert(0, new SelectListQueryItem<object> { ID = null, Name = "-- select --" });
	}
	return new SelectList(list, "ID", "Name", selectedValue);
}

// simple container used internally when building the SelectList
internal class SelectListQueryItem<T>
{
	public string Name { get; set; }
	public T ID { get; set; }
}

To get the select list using this extension, you just new-up the enum and call the method:

var carSelectList = new CarMake().ToSelectList();

This extension has an optional parameter to set which item in the list should be selected, and another optional parameter that will include a default item in the SelectList (you may want to adjust this extension to match your own convention for default items).

Here is an example that sets a selected value and includes a default item:

var carSelectList = new CarMake().ToSelectList(CarMake.Ford, true);

I’ve been working in C# for over ten years, and until just this week I had no idea that you could instantiate an enum with the new keyword. Of course, I can’t think of a reason why you’d normally want to new-up an enum either, but the capability is handy for this extension.
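For clarity, new-ing up an enum just yields the enum’s zero value, so with the CarMake example above it is equivalent to the default value, Ford:

var make = new CarMake();   // same as default(CarMake)
// make == CarMake.Ford, since Ford is the 0 value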

You can also use this extension by calling it directly on one of the specific enumerator items. This example does exactly the same thing as the previous example:

var carSelectList = CarMake.Ford.ToSelectList(CarMake.Ford, true);

I personally prefer the new-up pattern in this case, since it isn’t intuitive that calling ToSelectList on a specific item would return the list of ALL items from the containing enum.

Now that we have the SelectList, all we have to do is bind it up to a DropDownList.

In Asp.net WebForms 4.5, using model binding, it looks like this:

*.aspx

<asp:DropDownList ID="CarMakeDropDown" runat="server" 
	SelectMethod="GetCarMakeDropDownItems"
	ItemType="MyApp.SomeNameSpace.SelectListItem" 
	DataTextField="Text" 
	DataValueField="Value"
	SelectedValue="<%# BindItem.CarMakeValue %>" 
/>

*.aspx.cs

protected SelectList GetCarMakeDropDownItems()
{
	return new CarMake().ToSelectList();
}

For more in-depth examples of various techniques for binding SelectList to DropDownList in webforms, see my previous article titled “Asp.net 4.5 webforms – model binding selected value for dropdownlist and listbox”.

In Asp.net MVC, model binding to a SelectList is a very common and routine pattern, so I’m not going to provide a detailed example here. Generally though, you make sure your model includes a property or method for getting the enum’s SelectList, then you use the Html.DropDownList or Html.DropDownListFor helper methods. It looks something like this:

@Html.DropDownListFor(model => model.CarMakeValue, Model.CarMakesSelectList)
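
For completeness, here is a minimal sketch of what a backing view model might look like; the CarMakeValue and CarMakesSelectList member names are hypothetical, chosen only to match the helper call above:

public class CarViewModel
{
	// the value the dropdown binds to
	public CarMake CarMakeValue { get; set; }

	// builds the SelectList on demand, pre-selecting the current value
	public SelectList CarMakesSelectList
	{
		get { return new CarMake().ToSelectList(CarMakeValue); }
	}
}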

 

 

 

Asp.net 4.5 webforms – model binding selected value for dropdownlist and listbox

Published on May 28, 2013 by in Code

With the Asp.net 4.5 release, webforms inherited several improvements from its Asp.net MVC cousin. I’ve been enjoying the new model binding support lately. Since none of the old datasource controls fully support Entity Framework’s DbContext anyway, model binding is quite handy when using code-first EF models.

For the most part, model binding is straightforward. The basic usage is covered in the tutorials on the asp.net web site, but other resources are rare and hard to find. In the case of DropDownList and similar controls, I found that model binding in webforms was not as straightforward as I expected –especially when trying to set the selected value.

Before I begin, let me explain the SelectList class.

These examples are valid no matter what data you bind to, but the ItemType shown in these examples uses an implementation of the SelectList class, which I’ve borrowed from asp.net MVC. I don’t like to reference MVC assemblies in my webforms applications, so I just copy the source files into my webforms application. Using SelectList gives you a consistent and strongly typed collection, which tracks each item’s display text, value, and selected state. SelectList acts as a sort of miniature view-model.

The code for SelectList can be found on the AspNetWebStack project page over at CodePlex. The three files needed are SelectList, MultiSelectList (the base class of SelectList), and SelectListItem.

How you bind your dropdownlist depends a lot on whether it appears inside some other model bound control. Binding a dropdownlist inside a model bound container (repeater, listview, formview, etc.) looks something like this:

*.aspx

<asp:DropDownList ID="MyDropDown" runat="server" 
	SelectMethod="GetMyDropDownItems"
	ItemType="MyApp.SomeNameSpace.SelectListItem" 
	DataTextField="Text" 
	DataValueField="Value"
	SelectedValue="<%# BindItem.EdCodeType %>" 
/>

*.aspx.cs

protected SelectList GetMyDropDownItems()
{
	var items = from t in someDbContext.AllThings
				select new { ID = t.ID, Name = t.Name };

	return new SelectList(items, "ID", "Name");
}

Note: the SelectedValue property does NOT show up in intellisense. This appears to be a bug, caused by the fact that the property was marked with a BrowsableAttribute set to false (for mysterious reasons).

When working with a dropdownlist that is not conveniently nested within a model bound container control, binding the dropdown is still fairly simple. You have three options. You can explicitly declare a selected value, if you know what it is at design-time and it never changes. If that isn’t the case, you can set the SelectedValue property to the result of a method call, or wire up an OnDataBound event handler to set the selected item. Here are the examples:

Declarative example: Set SelectedValue to a known value (rarely helpful):

*.aspx

<asp:DropDownList ID="MyDropDown" runat="server" 
	SelectMethod="GetMyDropDownItems"
	ItemType="MyApp.SomeNameSpace.SelectListItem" 
	DataTextField="Text" 
	DataValueField="Value"
	SelectedValue="1234" 
/>

Declarative example: Set SelectedValue to the result of a method call:

*.aspx

<asp:DropDownList ID="MyDropDown" runat="server" 
	SelectMethod="GetMyDropDownItems"
	ItemType="MyApp.SomeNameSpace.SelectListItem" 
	DataTextField="Text" 
	DataValueField="Value"
	SelectedValue="<%# GetSelectedItemForMyDropDown()%>"
/>

*.aspx.cs

private SelectList myDropDownItems;

protected SelectList GetMyDropDownItems()
{
	//store the selectlist in a private field for use by other events/methods later
	if(myDropDownItems == null)
	{
		var items = from t in someDbContext.AllThings
					select new { ID = t.ID, Name = t.Name };

		var selectedItems = from t in someDbContext.SelectedThings
					select new { ID = t.ID};

		myDropDownItems = new SelectList(items, "ID", "Name", selectedItems);
	}

	return myDropDownItems;
}

protected string GetSelectedItemForMyDropDown()
{
	var selected = GetMyDropDownItems().FirstOrDefault(i => i.Selected);
	return (selected != null) ? selected.Value : string.Empty;
}

Event example: Set the selected item from an event handler

*.aspx

<asp:DropDownList ID="MyDropDown" runat="server" 
	SelectMethod="GetMyDropDownItems"
	ItemType="MyApp.SomeNameSpace.SelectListItem" 
	DataTextField="Text" 
	DataValueField="Value"
	OnDataBound="MyDropDown_DataBound" 
/>

*.aspx.cs

private SelectList myDropDownItems;

protected SelectList GetMyDropDownItems()
{
	//store the selectlist in a private field for use by other events/methods later
	if(myDropDownItems == null)
	{
		var items = from t in someDbContext.AllThings
					select new { ID = t.ID, Name = t.Name };

		var selectedItems = from t in someDbContext.SelectedThings
					select new { ID = t.ID};

		myDropDownItems = new SelectList(items, "ID", "Name", selectedItems);
	}

	return myDropDownItems;
}

protected void MyDropDown_DataBound(object sender, EventArgs e)
{
	var ddl = (DropDownList)sender;
	var selectedValue = GetMyDropDownItems().FirstOrDefault(i => i.Selected);
	if(selectedValue != null)
	{
		ddl.Items.FindByValue(selectedValue.Value).Selected = true;
	}
}

With the ListBox and similar controls, you can employ the same techniques as long as you only allow single item selection. If you need to support multiple selection, though, you can’t just set SelectedValue. Instead, use the DataBound event to loop through the items and select the appropriate ones.

*.aspx

<asp:ListBox ID="MyListBox" runat="server"
	SelectionMode="Multiple"
	SelectMethod="GetMyListBoxItems"
	ItemType="MyApp.SomeNameSpace.SelectListItem"
	DataTextField="Text" 
	DataValueField="Value" 
	OnDataBound="MyListBox_DataBound"
/>

*.aspx.cs

private SelectList myListBoxItems;

protected SelectList GetMyListBoxItems()
{
	//store the selectlist in a private field for use by other events/methods later
	if(myListBoxItems == null)
	{
		var items = from t in someDbContext.AllThings
					select new { ID = t.ID, Name = t.Name };

		var selectedItems = from t in someDbContext.SelectedThings
					select new { ID = t.ID};

		myListBoxItems = new SelectList(items, "ID", "Name", selectedItems);
	}

	return myListBoxItems;
}

protected void MyListBox_DataBound(object sender, EventArgs e)
{
	var lb = (ListBox)sender;
	foreach (var item in GetMyListBoxItems())
	{
		if (item.Selected)
		{
			lb.Items.FindByValue(item.Value).Selected = true;
		}
	}
}
 

TicketDesk 3 Dev Diary – Thoughts on Security

In this post, I’m brainstorming new approaches to handling security in TicketDesk 3, and reviewing the limitations of TicketDesk 2’s old approach.

Security in TicketDesk 2:

TicketDesk 2 shipped with support for either Active Directory or local security. Local security, via the built-in membership providers, worked reasonably well for most deployments, but the AD option has always been problematic.

Windows authentication, in conjunction with the WindowsTokenRoleProvider, works well enough for the basic question, “who are you?”, but windows authentication severely limits the amount of information you can obtain about the user. You get the windows account name and a list of the user’s AD groups –and that’s pretty much all you get.

Some features in TicketDesk required more than that. It needed display names and email addresses, and it needed to list all the users in certain AD groups –information the built-in providers are incapable of obtaining.

To support these requirements, TicketDesk 2 used a complicated background process for AD concerns. It maintained a local cache of AD data to reduce the number of queries, and moved AD query processing off of the request-servicing threads.

Even with the abstractions in System.DirectoryServices, querying AD is painfully obtuse. In addition, response times for AD queries are often measured in the tens of seconds –even in single domain environments. You can’t reasonably query AD in real-time without bringing the entire application to a standstill. The custom module alleviated these issues, but it was a complete nightmare to code and maintain.

The other significant problem has been the difficulty of extending the old system to support more features, such as custom roles and permissions. This is why TicketDesk still uses hard-coded roles with pre-defined permissions –I simply never had the time or energy to re-factor the AD side of the system.

Security technologies of the present:

I’ve played a bit with the SimpleMembership and Universal Providers, which have replaced the original SqlMembership providers. ASP.NET’s security system is still based on the same provider framework that shipped with .Net 2.0 back in 2005. Sadly, the framework is a fundamentally flawed system, and in need of a complete redesign.

In recent years, there have been two attempts to modernize the ASP.NET providers, but neither really addresses the underlying problems of the core framework itself. Both are just extensions layered on top of the original infrastructure.

Universal Providers are an evolutionary upgrade of the original SQL providers from ASP.NET 2.0. They mainly just move data access into Entity Framework, which allows them to work with any database back-end supported by EF. Otherwise, there isn’t much of a difference between them.

SimpleMembership was created for ASP.NET WebPages, but has become the officially recommended replacement in MVC projects as well. This is the default security provider in the newer MVC project templates that come with Visual Studio 2012. As I wrote in another post though, this system has some deep design flaws of its own. You almost have to learn how to hack up a customized version of SimpleMembership just to make it useful in any real-world scenario.

The good news is that both SimpleMembership and Universal Providers include oAuth support backed by the popular DotNetOpenAuth libraries. So, while flawed and built on a shaky foundation, at least they have been modernized.

Traditional windows authentication itself has not changed in any significant way, and there have been no official replacements for the windows security providers that I know of. ASP.NET still uses the same windows provider options that shipped in .Net 2.0, all of which are spectacularly terrible.

The major action for Active Directory authentication has been with Active Directory Federation Services. This is an STS token server that exposes AD to clients that support WS-federation. The Windows Identity Foundation (WIF), originally introduced in WCF, was pulled directly into the core of the .Net framework with the 4.5 release. This provides the infrastructure ASP.NET needs to leverage federated authentication and authorization. So, if you set up federation in ASP.NET, you can easily connect it to ADFS to authenticate your AD users. Best of all, ADFS can share any information stored in AD.

In Windows Azure land, you have the Access Control Service (ACS), which is also a WS-Federation token service. From ASP.NET, connecting to ACS is similar to connecting to ADFS. The real advantage of ACS is that it can act as an aggregator and proxy for any number of other external authentication sources. For example, you can connect your ACS to your domain’s ADFS service and to oAuth providers.

The new identity features in .NET 4.5 also incorporate claims based identity as a pervasive and normative feature used throughout the entire framework. You don’t have to use WS-federation in order to use claims based identities, since all identities are now claims based.

Ideas for TicketDesk 3:

For TicketDesk 3, I hope to support the following authentication options:

  • local user accounts (forms with user registration)
  • oAuth (sign-in with google, facebook, twitter, etc.)
  • Windows (AD or local system accounts)
  • WS-Federation (ADFS, Azure ACS, etc.)

My challenge is to find a way to bring all these pieces together in TicketDesk 3 in a sane way.

I want TicketDesk’s business logic to make use of claims based identity for its authorization decisions. Since claims are baked in, this seems to be the most sensible way of doing things. The key will be to map all users, regardless of how they are authenticated, to a common set of claims that TicketDesk understands. The business logic can then rely on having those claims attached to every user’s identity.
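
As a rough illustration of what I’m after, business logic could guard an operation with a simple claims check; the claim type URI and permission value here are hypothetical:

using System.Security.Claims;

public static class TicketSecurity
{
	// hypothetical TicketDesk claim type
	private const string PermissionClaim = "http://ticketdesk.example/claims/permission";

	public static bool CanAssignTickets(ClaimsPrincipal user)
	{
		// the business logic doesn't care how the user authenticated,
		// only that the expected claim is present
		return user.HasClaim(PermissionClaim, "AssignTickets");
	}
}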

For authentication, I have little choice but to rely on existing mechanisms. I am not a crypto expert, and have no intention of rolling my own password and user verification code. But, to support multiple authentication sources, I will likely need to roll my own module to intercept user requests, and dynamically route them to the appropriate pre-built authentication mechanism as needed at runtime.

Each of the possible authentication systems will produce a ClaimsIdentity, but the structure and content of those claims varies. Some authentication sources provide very few usable claims, while others may contain all the claims TicketDesk could ever need. To bridge the gap, TicketDesk will need an identity transformer. This component will map incoming identities, obtained during authentication, to a TicketDesk identity with all the claims TicketDesk needs. Since some claims cannot be obtained during authentication, TicketDesk will need to obtain any missing data from the user, or an administrator, then store and manage that data locally.
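
A minimal sketch of how such a transformer might look, using the ClaimsAuthenticationManager extension point that .Net 4.5 provides (the email lookup is a hypothetical placeholder for a local data store query):

using System.Security.Claims;

public class TicketDeskClaimsTransformer : ClaimsAuthenticationManager
{
	public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal)
	{
		if (incomingPrincipal != null && incomingPrincipal.Identity.IsAuthenticated)
		{
			var identity = (ClaimsIdentity)incomingPrincipal.Identity;

			// fill in any claims the authentication source didn't provide,
			// pulling the missing data from TicketDesk's local store
			if (!identity.HasClaim(c => c.Type == ClaimTypes.Email))
			{
				identity.AddClaim(new Claim(ClaimTypes.Email, LookupEmailFromLocalStore(identity.Name)));
			}
		}
		return incomingPrincipal;
	}

	private static string LookupEmailFromLocalStore(string userName)
	{
		// hypothetical placeholder for a local database lookup
		return "unknown@example.com";
	}
}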

As you can tell, this is all in the early conceptual stages. I’m doing a lot of experiments and research, but I’m at the point where I have a good grasp of most of the pieces that I’ll need to smash together.

I have several other projects with similar security requirements, and I’ve grown tired of approaching each new project with the question, “how much hacking will ASP.NET membership need this time?” So, no matter what I come up with, I hope to roll it into a re-usable package.

 

ASP.NET MVC SimpleMembership – Initialization and New Project Template Issues

The ASP.NET MVC project templates were upgraded in Visual Studio 2012. Among the many improvements was the change to SimpleMembershipProvider. Simple it may be named, but I’ve found using it to be anything but simple. There are some deep design flaws in the provider, as well as in how it is used in the stock MVC project templates.

The issues start with web.config. If you have worked with asp.net in the past, the most obvious difference in new web.config files will be the lack of configuration elements related to security providers.

Old templates had something like this:

<system.web>
       ... stuff ...
    <membership>
        <providers>
            <add name=".... />
        </providers>
    </membership>
    <profile>
        <providers>
            <add name=".... />
        </providers>
    </profile>
    <roleManager>
        <providers>
            <add name=".... />
        </providers>
    </roleManager>
    ... stuff ...
</system.web>

The new templates have none of these elements. There is a default connection in web.config, called DefaultConnection, and the templates use this connection for membership as well as application data by default.

So far, so simple.

If you need to change the default behavior of SimpleMembership though, where do you make those changes? For example, I don’t like the default password length requirements, so I’ve always changed that in config. You can explicitly add the SimpleMembership provider to web.config, but doing so is useless. SimpleMembership just ignores settings in web.config. If you want to change the behavior, you have to track down the entity classes SimpleMembership is using (models/AccountModels.cs in the stock templates), and manually change the attributes in code.
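
For example, the password policy in the stock template is expressed as data annotation attributes on the account models. It looks roughly like this (trimmed from Models/AccountModels.cs):

public class RegisterModel
{
	[Required]
	[StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
	[DataType(DataType.Password)]
	[Display(Name = "Password")]
	public string Password { get; set; }

	// to change the minimum password length, you edit MinimumLength here,
	// then recompile and redeploy; there is no web.config equivalent
}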

While modifying settings in code isn’t hard, it is a truly horrible design. Your security configuration is scattered all over your model classes, and the runtime security policy is buried deep within compiled code, instead of a centralized configuration file where your server admin can get at it. This design also eliminates the option of using config transformations during build, publish, or deployment to customize the security policy for different target environments.

If you want to change the connection used by SimpleMembership, you no longer do this through web.config either. Instead, you have to track down the code that initializes the security system, which is inconveniently located in the /Filters/InitializeSimpleMembershipAttribute.cs file. Here, you need to update the call to WebSecurity.InitializeDatabaseConnection to pass in the correct name of your connection string. Not very intuitive, but that’s not the truly weird part either…
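
The stock initializer call looks like this; to retarget SimpleMembership you swap in your own connection string name as the first argument:

// in /Filters/InitializeSimpleMembershipAttribute.cs
WebSecurity.InitializeDatabaseConnection("DefaultConnection", "UserProfile", "UserId", "UserName", autoCreateTables: true);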

The weird part is that initialization for SimpleMembership, in the stock project templates, is done from an ActionFilterAttribute. This attribute decorates the AccountController class, and only that class. The result is that none of the SimpleMembership system will be initialized until a real user physically visits one of your account pages; login, register, or whatever.

If you write code in a page that allows anonymous access, or in a background or start-up routine, and it tries to do something related to membership, roles, or profiles, you will get bizarre behavior. You would think that the code would just fail –the providers haven’t been initialized, right?

The truth is much, much worse.

If code tries to do security stuff before SimpleMembership is explicitly initialized, asp.net will initialize the default security providers automatically. Instead of the SimpleMembership provider you expected, it will initialize a SqlMembershipProvider using a connection named “LocalSqlServer”. This is configured in the machine.config file, which includes the default configuration settings for SqlMembership, and points it at the local SQLExpress instance.

So, let’s say you open up your new MVC project, which you created from the VS 2012 default template. You go to the HomeController and edit the Index action to call Membership.GetAllUsers(). That’s all. You don’t need to do anything with the users; just ask for a list of them.

Then you run your application.

What do you think happens?

If your system is set up like most, you probably have SQL Express installed. So, suddenly a mysterious aspnet.mdf file shows up in your app_data folder and gets attached to your SQLExpress instance… and this all probably happens without you noticing it. If you don’t have a SQLExpress instance, your application will hang on startup for a long while, then timeout with “The system cannot find the file specified”.

I don’t know about anyone else, but I REALLY hate when an application starts making new databases on SQL servers that it shouldn’t even know exist.

Overall, I am not impressed with SimpleMembership, nor with how the stock templates implement it. The providers are prone to unexpected and unintended behavior unless you have deep knowledge of their internal workings. Customizing SimpleMembership is the opposite of simple. As for configuration, it is confusing that it plugs into a standard asp.net feature, then ignores the traditional means of configuring that feature. And to top it off, requiring basic security configuration to be handled in code is a severely deranged design choice.

 

TicketDesk 3 Dev Diary – EF Migrations & TD2 Upgrades – Part 3

In this last installment, I’ll describe TicketDesk’s custom database initializers, which allow TicketDesk 3 to automatically choose which migration should run on startup. I’ll also confess to a major development blunder!

In my last post, I talked about creating the initial migration for new TicketDesk databases. I used the MigrateDatabaseToLatestVersion<T> initializer in global.asax.cs. This initializer works well for normal EF migration usage patterns, but TicketDesk needs initializers with a bit more spunk.
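
For reference, that wiring is a one-liner in Application_Start; this sketch assumes the context and migrations configuration class names:

Database.SetInitializer(
	new MigrateDatabaseToLatestVersion<TicketDeskContext, Configuration>());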

EF also ships with a feature called “automatic migrations”. I will NOT be talking about this feature here. Automatic migrations are, in my opinion, an abortion-in-progress for anything but the most trivial of applications. I advise you to read up on this feature, and then do your best to avoid it. What I will be talking about here is automatically executing a “code-based migration”.

There is a great primer on automatic migrations here, and another on code-based migrations here.

Initializers:

A common technique for customizing database initialization is to write a custom initializer that inherits one of the standard initializers. Sadly, the MigrateDatabaseToLatestVersion initializer is ill-suited for inheritance. There are no overridable methods or extension points to plug into. Instead, I’ll roll my own initializers, implementing IDatabaseInitializer<T>, and take direct control over the migrations process. Fortunately, the EF APIs makes this pretty simple.

Initially, I built one big do-it-all initializer, but later switched to separate initializers that operate independently –in keeping with the single responsibility principle (SRP).

On Application_Start, TicketDesk will run both initializers. Here is how the process should go:

1) Legacy Initializer Runs
    1a) Check if there is a database
    1b) If a database exists, check the version number (stored in the dbo.settings table)
    1c) If version 2.0.2, run the legacy initial migration (else, do nothing)
2) Regular Initializer Runs
    2a) Run all standard migrations

While the initializers don’t need a mediator, or other coordinating object, the order in which they are invoked is relevant to achieve the desired behavior.
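
Here is a sketch of how that wiring might look in Application_Start; the connection name and the explicit Initialize calls are my assumptions, not code lifted from the TicketDesk source:

// register an initializer for each context type
Database.SetInitializer(new LegacyDatabaseInitializer("TicketDesk"));
Database.SetInitializer(new TicketDeskDatabaseInitializer("TicketDesk"));

// force initialization in the required order: the legacy upgrade
// check must complete before the standard migrations run
using (var legacy = new TicketDeskLegacyContext())
{
	legacy.Database.Initialize(force: false);
}
using (var current = new TicketDeskContext())
{
	current.Database.Initialize(force: false);
}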

The trick to step 2 is that, if the DB was upgraded in step 1c, the regular migrator will think its own initial migration has already run (remember, both use the same ID and hash value). The regular initializer will still run any additional migrations added later, even if the legacy initializer performed the initial migration.

The initializers themselves are rather simple. They instantiate EF’s DbMigrator, which is the code equivalent to using the Package Manager Console’s Update-Database command.

The regular initializer:
public class TicketDeskDatabaseInitializer : IDatabaseInitializer<TicketDeskContext>
{
    private string ConnectionName { get; set; }
    public TicketDeskDatabaseInitializer(string connectionName)
    {
        ConnectionName = connectionName;
    }

    public void InitializeDatabase(TicketDeskContext context)
    {
        var config = new Configuration();
        config.TargetDatabase = new DbConnectionInfo(ConnectionName);
        var migrator = new DbMigrator(config);
        migrator.Update(); //run all migrations 
    }
}
The legacy initializer:
public class LegacyDatabaseInitializer : IDatabaseInitializer<TicketDeskLegacyContext>
{

    private string ConnectionName { get; set; }

    public LegacyDatabaseInitializer(string connectionName)
    {
        ConnectionName = connectionName;
    }

    public void InitializeDatabase(TicketDeskLegacyContext context)
    {
        //if an existing TD 2.x database is found, run the upgrade; creates the migration history table 
        if (context.Database.Exists() && IsLegacyDatabase(context))
        {
            var upgradeConfig = new Configuration();
            upgradeConfig.TargetDatabase = new DbConnectionInfo(ConnectionName);

            //this will do nothing but add the migration history table 
            //  with the same migration ID as the standard migrator.
            var migrator = new DbMigrator(upgradeConfig);
            migrator.Update("Initial"); //run just the initial migration
        }
    }

    public static bool IsLegacyDatabase(TicketDeskLegacyContext context)
    {
        // TicketDeskLegacyContext has no DbSets, directly execute select query
        var oldVersion = context.Database.SqlQuery<string>(
                  "select SettingValue from Settings where SettingName = 'Version'");
        return 
        (
            oldVersion != null && 
            oldVersion.Count() > 0 && 
            oldVersion.First().Equals("2.0.2")
        );
    }
}

The Accidental Integration Test:

You will note that the constructor for both initializers requires an explicit connection name, which is used to set the target database for the migrator. This is the result of another oversight in how EF migrations were implemented internally –I consider it an oversight anyway.

The initialize method takes an instantiated DbContext as a parameter, but there is no way to pass that context to a DbMigrator. Instead, DbMigrator always creates a new instance of the DbContext, and it always uses the DbContext’s default constructor. So, in cases where you want to use non-default connections, you must explicitly pass that information into the DbMigrator, otherwise it will use whatever connection your DbContext’s default constructor uses.

I discovered this issue when trying to unit test the legacy initializer. My unit test used a custom connection string when it instantiated the DbContext (pointing to a localdb file database in the unit test project). I would run the test, but the DB in the test project refused to change.

Eventually, I discovered that the migrator was making a new DbContext from the default constructor, which in this case was hard-coded to a default connection named “TicketDesk”. This connection string was present in my app.config file, but I had, unwisely, left it pointed at my production database server… the real TicketDesk 2 database that my company’s IT staff uses!

Yikes!

The test was, indeed, migrating the crap out of my production database!

As a testament to the migrator’s effectiveness, and the similarity between the TD3 and TD2 schemas, TicketDesk 2 didn’t skip a beat. The users never knew their DB had been upgraded, and everything kept working. Later, I manually un-migrated the production database using the PM console commands, which also worked without a hitch.
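
For reference, rolling a database back from the Package Manager Console looks something like this (the target migration name is whatever point you want to return to):

Update-Database -TargetMigration:$InitialDatabase      # un-migrate everything
Update-Database -TargetMigration:"SomeEarlierMigration"  # or roll back to a named migration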

Lesson learned! Again…

Planting Seeds

Another interesting facet of the design of EF Migrations is how it handles seed data. For migrations, seeding is done through an override method in the migration configuration class. The odd thing about this seed method is that it runs every time migrations run, even if there are no pending migrations to apply to the database. In TicketDesk’s case, both seed methods will run (from the legacy and regular configuration classes).

In my opinion, there are two kinds of “seed” data: “default data”, which should be pre-set when the DB is created, and “test data”, which is used for testing or just pre-loading some example data. EF uses “seed data” in the sense of “test data”.

As a practical example of the difference, TicketDesk has a bunch of default settings that are stored in its settings table. These values are required, or the application fails, but admins can change the default settings later if they choose. This is not, in the sense EF uses the term, “seed data”. It is “default data”, for which EF has no explicit support.

For default data, the best solution seems to be inserting the data during the migration itself –by adding insert operations to the migration class’s Up and Down methods. Unfortunately, there aren’t any convenient helper methods for doing inserts, so you have to issue raw SQL insert commands as strings.
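
Here is a sketch of what that looks like inside a migration class; the Settings table and values are illustrative, not TicketDesk’s actual schema:

public partial class Initial : DbMigration
{
	public override void Up()
	{
		// ... schema operations (CreateTable, etc.) ...

		// default data has to go in as raw SQL strings
		Sql("INSERT INTO Settings (SettingName, SettingValue) VALUES ('Version', '3.0.0')");
	}

	public override void Down()
	{
		Sql("DELETE FROM Settings WHERE SettingName = 'Version'");

		// ... reverse the schema operations ...
	}
}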

For “test data”, you can rely on the seed method in configuration, as long as you are comfortable with your data being reset every time the application starts. At least there you can use Linq helper methods and your strongly-typed entities.
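
A sketch of the Seed override, using a hypothetical Ticket entity; the AddOrUpdate helper keys on a property so that re-running the seed updates rows instead of duplicating them:

protected override void Seed(TicketDeskContext context)
{
	// runs on every migration pass; keying on Title means the row
	// is inserted once and updated thereafter
	context.Tickets.AddOrUpdate(
		t => t.Title,
		new Ticket { Title = "Example ticket", Status = "Open" });
}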

At this point, I have working database initialization and full migrations support in TicketDesk 3, at least for the core model entities. Security, and security database support, may prove more challenging.

 

TicketDesk 2.1.1 – Official Release

TicketDesk 2.1.1 has been officially released at codeplex, and the source code has been merged and pushed to the public mercurial repository.

This is a platform refresh of the TicketDesk 2 project. The source code now supports development with Visual Studio 2012, and the application has been updated to target the .Net Framework 4.5 and Asp.net MVC 4.

The databases have not been modified, but have been verified for compatibility with SQL 2012, including localdb.

There are no new user-facing features in this release.

 