Decaying Code

Where code comes to die

Tip & Trick: IntelliSense for JavaScript in Visual Studio 2010

I know pretty much everyone already knows this, but I would love to remind everyone how to get IntelliSense working in Visual Studio 2010.

I mainly use jQuery, and the API is huge. A bit too huge to remember sometimes, so I always have the documentation open in a browser window. To ease the pain, I love to use IntelliSense, and here is how you get it to work.

First, open your JavaScript file. Then, drag and drop the file you want to reference to the top of the document.

That's it. You now have IntelliSense if a "vsdoc" of your library is available. I'll even throw in something more for you to enjoy. When you declare an event handler for a jQuery element and want to access the "event" argument, note that the argument is of type jQuery.Event. Just add "/// <param name="variableName" type="jQuery.Event" />" right after the declaration and it will enable IntelliSense on that variable.
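
As a sketch, the two annotations look like this. The file and element names are examples, and the comments only have an effect inside Visual Studio's JavaScript editor, not at runtime:

```javascript
/// <reference path="jquery-1.4.1-vsdoc.js" />

$("#saveButton").click(function (e) {
    /// <param name="e" type="jQuery.Event" />
    // With the annotation above, typing "e." now offers
    // jQuery.Event members such as preventDefault().
    e.preventDefault();
});
```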


View Source disabled in Internet Explorer?

Found the fix after scouring the forums.

The main reason is that caching of SSL pages is disabled, so those pages are never stored on disk. Because of that, IE doesn't allow you to view the source of those pages.

To fix the issue, open the registry and go to the following key:


There should be a REG_DWORD value named "DisableCachingOfSSLPages". If the value is set to "0x00000001", change it to "0x00000000" and restart Internet Explorer.

This should allow you to view the HTML of your SSL pages when working in a secure environment.

Quick introduction to SOLID

For those who don't know SOLID, it's an acronym of acronyms.
SOLID stands for the following:
SRP: Single Responsibility Principle
OCP: Open/Closed Principle
LSP: Liskov Substitution Principle
ISP: Interface Segregation Principle
DIP: Dependency Inversion Principle
Bringing those principles together is credited to Robert C. Martin (AKA Uncle Bob).
You can read about SOLID further on Wikipedia or on Uncle Bob's website.
Please note that I'm also in the process of writing some nice posts on how to apply them to your code.
So stay tuned!

Back to basics: Why should I use interfaces?

So I had this interesting discussion with a colleague about having a clean architecture for a small application he is building. Since it's his first step into SOLID, I wanted to take it easy and see how things were laid out. Since the program was mostly already written, I immediately noticed the lack of patterns and the direct data access in the event handlers of his WinForms application. The conversation went a bit like this:

Me: What is this code with data access in the "OnClick" of your button?
Him: Well, it's the information I need to execute this command.
Me: Do you know the Model-View-Presenter pattern? Because right now, you are mixing "Presentation", "Data Access" and "Business Logic".
Him: I've used it before but it's been a while. How do you implement it?

So after showing him the pattern and explaining a basic implementation (because there are a lot of different ways to implement this pattern), he asked me the following question:

"Of course, you don't need to use interface everywhere, right?"

I went on to explain testability and such, but there is something else from that discussion I wanted to share. When my class has dependencies injected through the constructor, I have two choices: depend upon the implementation, or depend upon the abstraction (interface/abstract class). What's the difference, and why is it so important?

MyClass depending upon the abstraction of "MyClassDataAccess"

When your class depends upon the abstraction, it can take any class that implements that abstraction (be it an abstract class or an interface). The implementation can easily be replaced by something else, and that is essential for unit testing your logic.

MyClass depending upon the implementation of "MyClassDataAccess"

When your class depends upon the implementation directly, the only thing that can be passed to this class is that specific implementation. Anything else must derive from it. This couples the caller and the callee really tightly.

Why is it important ?

When you have a class that accesses services or slow resources (database, disk, etc.), or even a class that you haven't coded yet, an interface should be used. Of course, it's not a law. You use an interface/abstract class when you need to decouple the implementation of one part of a system from another. That allows me to pass in mocked objects and test my requirements/logic. This also brings another advantage that might not be evident at first: the customer changing his mind. The customer changes his mind and no longer wants to store information in XML, but in a database instead. Or the customer says not to implement "this part of the system" because it will be available through a service. And so on. Using interfaces and abstract classes is the oil that makes the engine of your software turn smoothly, allowing you to replace parts with better/different parts without hell breaking loose because of tightly coupled implementations.
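
In C# the abstraction would be an interface; in JavaScript the contract is implicit, but the same idea can be sketched with constructor injection. MyClassDataAccess and getCustomer are hypothetical names used only for illustration:

```javascript
// Tightly coupled: the class creates its own dependency, so a unit
// test is forced to use the real (slow) data access implementation.
function TightlyCoupledMyClass() {
    this.dataAccess = new MyClassDataAccess(); // concrete, hard-wired
}

// Decoupled: the dependency is injected; anything honouring the
// contract (a getCustomer method, here) can be passed in.
function MyClass(dataAccess) {
    this.dataAccess = dataAccess;
}
MyClass.prototype.customerName = function (id) {
    return this.dataAccess.getCustomer(id).name;
};

// A unit test injects a mock and never touches a database or service.
var mockDataAccess = {
    getCustomer: function (id) { return { id: id, name: "Jane" }; }
};
var sut = new MyClass(mockDataAccess);
var result = sut.customerName(42); // value comes straight from the mock
```

Replacing the mock with the real data access class requires no change to MyClass at all, which is exactly the decoupling the abstraction buys you.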

ASP.NET MVC - How does the Html.ValidationMessage actually work?

When you create a basic ASP.NET MVC application, you normally will have "Html.ValidationMessage" inserted automatically for you in the Edit and Create views. Of course, if you try to type a string into a number field, it will fail. Same thing for dates and such. The real question is... how does it do it?

Well, the ValidationMessage method only looks to see whether the model property with the given name has received any errors. If it has, it displays the specified message. So now that we've covered the "how", I'll show you where it does that.

The answer lies within the DefaultModelBinder, which comes activated by default with ASP.NET MVC. The model binder does its best guess to fill your model with the values sent from a post. However, when it can match a property name but can't set the value (invalid data), it catches the exception and adds it as an error in a ModelStateDictionary. The ValidationMessage helper then picks up data from that dictionary and finds the errors for the right property of your model.
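
Here is a rough JavaScript sketch of that mechanism. This is not the actual ASP.NET MVC source (which is C#); the "Price" property and the parseValue helper are invented for illustration:

```javascript
var modelState = {}; // plays the role of the ModelStateDictionary

// Pretend "Price" is a numeric property on the model.
function parseValue(key, raw) {
    if (key === "Price") {
        var n = Number(raw);
        if (isNaN(n)) throw new Error("The value '" + raw + "' is not valid for Price.");
        return n;
    }
    return raw;
}

// The binder's job: match posted values to properties, and record an
// error (instead of crashing) when a value can't be converted.
function bind(model, postedValues) {
    for (var key in postedValues) {
        try {
            model[key] = parseValue(key, postedValues[key]);
        } catch (e) {
            modelState[key] = { errorMessage: e.message };
        }
    }
    return model;
}

// Html.ValidationMessage("Price", "Enter a number") then boils down to:
function validationMessage(name, message) {
    return modelState[name]
        ? '<span class="field-validation-error">' + message + "</span>"
        : "";
}

bind({}, { Price: "abc" }); // invalid post: the error gets recorded
var html = validationMessage("Price", "Enter a number");
```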

That's it! Of course, it's pretty simple validation and I would still recommend using a different validation library. There are already a few available in the MVCContrib project on CodePlex.

Simple explication of the MVC Pattern

Since the last time I wrote a blog post was more than a few months ago, I would like to start by saying that I'm still alive and well. I had changes in my career and my personal life that required some attention and now I'm back on track.

So for those who know me, I was participating in TechDays 2009 in Montreal, presenting the "Introduction to ASP.NET MVC" session. I will also be presenting the same session in Ottawa (in fact, this blog post is being written on the way to Ottawa, with Eric as my designated driver).

So what exactly is ASP.NET MVC? It's simply Microsoft's implementation of the MVC pattern that was first described in 1979 by Trygve Reenskaug (see Model-View-Controller for the full history).

So, in more detail: MVC is the acronym of Model, View and Controller. We will look at each component and the advantages of keeping them properly separated.

The Model

The model is exactly what you would expect. It's your business logic, your data access layer and whatever else is part of your application logic. I don't really have to explain this one. It's where your business logic sits, and it should therefore be the most tested part of your application.

The model is not aware of the view or of the controller.

The View

The view is where the presentation layer of your application sits. In a web framework, this is mostly ASPX pages with logic that is limited to showing the model. This layer is normally really thin and focused only on displaying the model. The logic is mostly limited to encoding, localization, looping (for grids) and such.

The view is not aware of which controller invokes it. The view is only aware of the model to display.

The Controller

The controller is the coordinator. It retrieves data from the model and hands it over to the view to display. The controller can also take on cross-cutting concerns such as logging, authorization and performance monitoring (performance counters, timing each operation, etc.).
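
To make the three roles concrete, here is a minimal, framework-free sketch. All the names (productModel, productListView, listController) are invented for illustration:

```javascript
// Model: business logic and data; unaware of view and controller.
var productModel = {
    all: function () { return [{ name: "Widget", price: 9.99 }]; }
};

// View: only knows how to display the model it is handed; unaware
// of which controller invoked it.
function productListView(products) {
    return products.map(function (p) {
        return "<li>" + p.name + " - $" + p.price.toFixed(2) + "</li>";
    }).join("");
}

// Controller: the coordinator; retrieves data from the model and
// hands it over to the view.
function listController(model, view) {
    return view(model.all());
}

var html = listController(productModel, productListView);
// Swapping productListView for, say, a mobile view requires no
// change to the model or the controller.
```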

Advantages

Now, why should you care about all that? First, there is a clear-cut separation between WHAT is displayed to the user and HOW you get the information to display. In the example of a web site, it becomes possible to display different views based on the browser, the device, the capabilities of the device (JavaScript, CSS, etc.) and any other information available to you at the moment.

Another advantage is the ability to test your controller separately from your view. If your model is properly done too (coded against abstractions, not implementations), you will be able to test your controller separately from both your model and your view.

Disadvantages

MVC is mostly a web pattern rather than a WinForms pattern. There is currently no serious implementation of the MVC pattern for anything other than web frameworks. The MVC pattern is hence found in ASP.NET MVC, FubuMVC and other MVC frameworks, which limits your choices to the web.

If you take a specific platform like ASP.NET MVC, other disadvantages (that could be seen as advantages) slip in. Mostly, you lose any drag and drop support for server controls. Grids now have to be hand-rolled and built manually instead of relying on the abstractions offered by the original framework.


Since we mostly need more fine-grained control over our views, the abstractions offered by the core .NET Framework are normally not extensible/customizable enough for most web designers. Some abstractions might even become unsupported in the future, pushing us toward more precise control of our views. The pattern also allows greater testability than what is normally offered by default in WebForms (a Page Controller with templating views).

My recommendation is effectively "it depends". If an application is already built with WebForms and doesn't cause any friction, there is no point in redoing the application completely in MVC. However, for any new greenfield project, I would recommend at least taking a look at ASP.NET MVC.

Back from vacation and personal changes

Alright! It's been a while. Here is what happened during all that time. I went on vacation in August and I had some changes in my life on the way back.

Just to let everyone know, posting should now be more frequent. I'm scheduled to be on the Visual Studio Talk Show somewhere near the end of October. Also, don't miss me at TechDays 2009 in Montreal and Ottawa! Some pretty nice subjects will be covered!

Talk to you all later!

Is your debugger making you stupid?

What is one of the greatest advances in Visual Studio since the coming of .NET? You might think it is the Garbage Collector, or the IL which allows interoperability between languages. I think one of the great advances from Visual Studio 2003 all the way through Visual Studio 2010 is the debugger. Previously, debuggers were hardly as powerful as Visual Studio's. And that is the problem.

What is a debugger?

The quote from Wikipedia is "a debugger is a computer program that is used to test and debug other programs". The debugger is used to find bugs and figure out how to fix them.

The debugger lets you go step by step, step backward, step into code, etc. All this in the hope of reproducing a bug.

Why does it make me stupid?

It might be one of the most powerful tools you have at hand. But it's also one of the most dangerous. It encourages you to test your software yourself instead of having tools do the job for you. Once you are done debugging a module, you will never debug it again unless there is a bug in it. Then you start building around this module, and you test the new modules against the first one. And then the fun starts. You modify the first module but never re-test the first scenario you built. Now, the next time another developer builds something based on the same module, there are two things that can happen. The first is that the developer is going to be afraid to change the module and will duplicate the code in another module to make sure he doesn't break anything. The second is that the developer is going to change the module anyway and rerun the application to make sure he didn't break anything obvious. Okay, there is a third option that consists of adding tests to cover the new behaviour, but we're not interested in the good outcomes here. Just the bad.

Ripples of the Debugger

Okay. After this nice little story, what do you think will happen in the future? As other developers go into the code, they will build on top of the modules again and again. As the modifications keep coming, the module keeps changing. As the changes go "debugger tested" only, bugs start to appear in modules that never had bugs before. To "test" the right behaviour, the team starts adding test scripts to execute manually to make sure no bugs are left behind. This requires interns or QA people to run the tests.

The solution?

Infect your code with tests and stop using the debugger. It's that simple. I know that Ruby doesn't have a debugger integrated into the main editors it's used with, and Ruby developers still manage to deliver quality code without one. In fact, lots of developers manage to make great software without a debugger. Running without a debugger and without tests, however, is NOT the solution. You must ensure that your code is covered with tests as much as possible. When you find a bug, write a test that reproduces the bug, then fix the production code. As your code gets tested, additional modules will not break existing ones unless they break a test. This is the solution. This is the way to make good and clean code.
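
As a sketch of that workflow (orderTotal and the zero-quantity scenario are invented for illustration; the code below is shown after the fix, so the test passes and now guards the module):

```javascript
// Production code, after the bug fix.
function orderTotal(lines) {
    var total = 0;
    for (var i = 0; i < lines.length; i++) {
        total += lines[i].price * lines[i].quantity;
    }
    return total;
}

// The test written to reproduce the reported bug. It failed against
// the buggy version, passes once the production code is fixed, and
// from now on re-runs in seconds, no debugger session required.
function testTotalIgnoresZeroQuantityLines() {
    var total = orderTotal([
        { price: 10, quantity: 2 },
        { price: 99, quantity: 0 } // the case from the bug report
    ]);
    if (total !== 20) throw new Error("expected 20 but got " + total);
}
testTotalIgnoresZeroQuantityLines();
```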

The cost of Bad Code

Every developer writes code. Every developer works or has worked on a brownfield project. Working on a brownfield project often makes developers complain about the code being poorly written and hard to maintain. That surely sounds familiar, right?

This is basically a plea for good code. Bad code makes things worse and costs businesses money.

How much are we talking about?

There is no scientific study about this, primarily because most projects are private and won't allow studies, and because there is still no clear metric that represents clean code. Metrics mostly can't capture bad code. So how much money can be saved? Well... bad code hinders maintenance and comprehension, and scares programmers away from changing a class that was working well before. I don't think we can calculate it yet, but I think Cyclomatic Complexity, LOC per function and Code Coverage are big indicators of code that is hard to understand and difficult to change.

Code with high cyclomatic complexity and huge LOC per function scares programmers away from making changes. Why? Because we all know that if we change something inside one of those methods, the ripples of change will make something else break. This fear can be neutralized by high code coverage of those big methods and/or by splitting them up.

Time for totally unscientific numbers. I think that complex code requires more than double the time to modify. Why? Well... let's say the developer has to spend a considerable amount of time in the debugger instead of running tests. Tests for a (big) module should take less than 10-15 seconds to run (including the test runner initialization). Debugging the same module to verify a behaviour normally takes a minute or two. Rinse and repeat at least a dozen times and you find yourself at around 2-3 minutes running the tests versus 12-24 minutes debugging the application. And this is just the beginning. If there are no tests, a huge and complex method will literally take at least 10 minutes just to understand (depending on context). A test-"infected" code base allows for quick failure verification without having to spend hours in the debugger. Calculate it as much as you want, but as Robert C. Martin said:

The only way to go fast is to go well.

So, are you saving your company time, or costing it money? I think we can all gain something from writing clean code. Companies will save on maintenance costs, and programmers will improve their craft and become better programmers who are proud of what they do.

Improving code quality - 2 ways to go

I've been thinking about this for at least a week or two. In fact, ever since I started (and finished) reading the book "Clean Code" by Robert C. Martin. There are probably only two ways to go.

Fix the bad code

This method is called refactoring, or "cleaning" the code. Of course, you can't truly know which code is bad without a static analysis tool or programmers working in the code. The tool will allow you to spot pieces of code that could harbour bugs and/or be hard to work with. The problem is that refactoring or cleaning up code is really expensive from a business perspective. The trick is to fix it as you interact with the code. It is probably impossible to request time from your company to fix code that could cause bugs. If you ask, you will probably receive this answer: "Why did you write it badly in the first place?". Which brings us to the other way to improve code quality.

Don't write it

If you don't write the bad code in the first place, you won't have to fix it! That sounds simple to an experienced programmer who has improved his craft over the years, but rookies will definitely leave bad code behind, and eventually you will encounter that bad code. So how do you avoid the big refactoring of mistakes (not just rookies')? I believe that training might be the way to go. When I had only one year of experience in software development, I wrote WAY too much bad code. I still do; it's not that I don't see it slip through. Sometimes things must be rushed, I don't fully understand the problem, and some small abstraction mistakes get in. I write far less bad code than when I started. However, that bad code doesn't just magically disappear. It stays there.

What about training?

I think that training and/or mentoring might be the way to go. Mentoring might be hard to sell, but training is definitely not that hard to sell. Most employees have an amount of money attached to their name within a company that represents training expenses that can be spent on them. What I particularly recommend are courses in Object-Oriented Design or Advanced Object-Oriented Design. Hell, you might even consider an xDD course (and by xDD... I mean TDD, BDD, DDD, RDD, etc.). Any of those courses will improve your skills and bring you closer to telling clean code from bad code. Training on a specific framework (like ASP.NET MVC or Entity Framework) will only show you how to get things done with that framework, and that can be learned on your own or through a good book.

So? What do you all think? Would you rather have a framework course or a "Clean Code" course?