How to add a custom build step to a TFS Server Build?

Most of the time when you are creating a build script (TFSBuild.proj), you need to do some steps after the build. Whether it’s creating an MSI for easier deployment, creating a VSI for a Visual Studio add-in, or whatever it may be… you normally do a post-build step.

A post-build event looks like the following inside TFSBuild.proj:

<Target Name="AfterDropBuild">
<CallTarget Targets="PostBuildStep" />
</Target>

<Target Name="PostBuildStep">
<!-- Do something -->
</Target>

When you only have 1 or 2 tasks and one fails, it might be easy to find the one that failed. What if you have 8 to 20 tasks? It then becomes incredibly hard to find which one failed. What I’ve seen the most is usually some <Message> tags with some descriptive text. This is the equivalent of debugging with Console.WriteLine or Debug.Print.

What if you could know EXACTLY which task failed to run? Here is a way to add a custom build step to your TFS build which will allow you to easily know what crashed.

<Target Name="PostBuildStep">
<!-- Create the build steps which start in mode "Running" -->
<BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Message="Doing Something on a PostBuild Event" Condition=" '$(IsDesktopBuild)' != 'true' ">
<!-- Return the ID of the tasks and assigned it to "PropertyName" -->
<Output TaskParameter="Id" PropertyName="PostBuildStepId" />
</BuildStep>

<!-- Do something -->

<!-- When everything is done, change the status of the task to "Succeeded" -->
<BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Id="$(PostBuildStepId)" Status="Succeeded" Condition=" '$(IsDesktopBuild)' != 'true' " />

<!-- If an error occurs during the process, run the target "PostBuildStepFailed" -->
<OnError ExecuteTargets="PostBuildStepFailed" />
</Target>

<Target Name="PostBuildStepFailed">
<!-- Change the status of the task to "Failed" -->
<BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Id="$(PostBuildStepId)" Status="Failed" Condition=" '$(IsDesktopBuild)' != 'true' " />
</Target>

With that in place, you will see exactly which step failed. As a bonus, it will also give you the time at which each step completed, which easily lets you compare steps to see which one is taking the most time.

I would like to thank Martin Woodward, who is a Team System MVP. The question originated from Stack Overflow and more details are also available on Martin’s website.

Why should I use mock objects in my unit tests?

If we cut out any “fanboy” favouritism toward a certain framework and try to keep it to a one-liner… I would say: “To simulate behaviours of objects that are impractical or impossible to incorporate inside a unit test”.

The Wikipedia article about Mock Object mentions some reasons an object should be mocked.

The object…

Supplies Non-Deterministic Results

By “non-deterministic” we mean everything from time to currency rates, shipping rates, etc. Any value that could change because of a specific implementation, such as an algorithm, should be mocked. Mocked objects allow you to return predetermined values that are independent of the algorithm/time/etc.

This makes it easier to test the state of the System Under Test (SUT) after running some methods.
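As a quick illustration (my own sketch, not from the original post), here is how a hypothetical IClock abstraction could pin down “the current time” with Moq, the framework covered in other posts here (Moq 2.x-era Expect syntax assumed):

using System;
using Moq;

// Hypothetical abstraction over DateTime.Now, introduced only for this example.
public interface IClock
{
    DateTime GetCurrentTime();
}

public class ClockExample
{
    public static void Main()
    {
        // The mock always returns the same predetermined date...
        Mock<IClock> fakeClock = new Mock<IClock>();
        fakeClock.Expect(c => c.GetCurrentTime()).Returns(new DateTime(2009, 2, 25));

        // ...so the SUT sees the same "current time" on any machine, at any hour.
        Console.WriteLine(fakeClock.Object.GetCurrentTime());
    }
}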

Has States that are difficult to create or reproduce

The example given by Wikipedia is a “network error”. It’s difficult to reproduce this kind of situation on every developer’s station. Other situations might include security, the location of the test on disk, or network availability (not just errors). If some objects that the SUT is using require any of those, the tests WILL fail somewhere and somehow. If it’s not on a developer’s machine, it’s going to be on the build machine.

Mocking those objects and giving them a proper behaviour removes any “settings” that would otherwise be necessary to run a unit test.

Is Slow

Databases, network and file access (up to a point) are all slow. If your SUT is an ObjectService that is using a Repository and you are hitting the database directly, it is bound to be slow. Of course the database can cope with it. But as you add more tests, the unit test suite will soon take HOURS to run. A small in-memory database will save the day and run those tests in less than a few minutes.

A mocked repository might just keep a collection of saved objects so that when a “Get” method is called, the object is readily available in that collection. This kind of mock is called a “fake” in the world of mocking. It implements more complex behaviour but allows for easy initialization and more timely responses.
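Here is a minimal hand-rolled sketch of such a fake, assuming a hypothetical IRepository<T> interface keyed by integer IDs (none of this is from the original post):

using System.Collections.Generic;

// Hypothetical repository contract, for illustration only.
public interface IRepository<T>
{
    void Save(int id, T item);
    T Get(int id);
}

// The "fake": a repository backed by an in-memory dictionary instead of a database.
public class InMemoryRepository<T> : IRepository<T>
{
    private readonly Dictionary<int, T> store = new Dictionary<int, T>();

    public void Save(int id, T item)
    {
        store[id] = item; // no database round-trip, so tests stay fast
    }

    public T Get(int id)
    {
        T item;
        store.TryGetValue(id, out item);
        return item; // default(T) when nothing was saved, like a cache miss
    }
}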

Does not yet exist or may change behaviour

If the only things that are currently available within your system boundaries are contracts (interfaces in C#), it’s easier to mock the interface that the SUT requires and go with that temporarily while the component is being developed. This allows testing and coding at the same time.

Conclusion

Mocking is an excellent tool to test a specific object under controlled conditions. Of course, those conditions are bound to change and tests are going to be maintained. What I particularly like is that when I use a mocking framework, I don’t need to create 1000+ objects (exaggerating here) with some specific behaviours or create “too intelligent” mocks that will have to be maintained. I dynamically declare a mock with my favourite mocking framework, with the expected calls and the expected returns, and I go through with that.

What normally happens is that I have considerably fewer mock objects inside my unit tests, and the only objects left standing are some in-memory database objects with simple implementations that would be too hard to define with a mocking framework.

Is blogging making people blog?

I have a colleague who has his own blog. When I started blogging, it was because of Jeff Atwood’s post about starting to write, and because he said on Stack Overflow’s podcast that he’s not always right, but he has an online presence and he’s loud.

I agree. Having a blog and writing in it (useful or not) is a way to broadcast yourself. Having a blog (depending on what you write in it) can be good for your career and can also serve as a way to store the personal knowledge you gained throughout projects.

I’m happy that I started my blog. It’s kind of a technical diary and I’m more than happy to write in it when I have a moment.

To go back to my colleague: he’s a nice guy with huge technical knowledge. What I’m proud of is this. I started my blog in 2009 and he has had his since 2007. During 2007 and 2008 he wrote an average of 5-8 posts per year. When I started my blog, he followed me on my journey and we started comparing total numbers of visits in Google Analytics. That’s when the friendly competition started.

He knew that I would win easily if he didn’t post something interesting. So he followed suit. That’s why I say that blogging is contagious.

Now, instead of making around 8 blog posts a year, he makes 8+ a month.

Is it really THAT contagious, or are we the only duo of bloggers like that?

When would you use delegates in C#?

This is a valid question. Before C# 3.0, you could use delegates or declare full methods to bind to events. Now we can declare event handlers directly through lambdas. (See this post for many different examples of how to bind event handlers.)

Jon Skeet answered me with the following:

  • Event handlers (for GUI and more)
  • Starting threads
  • Callbacks (e.g. for async APIs)
  • LINQ and similar (List.Find etc)
  • Anywhere else where I want to effectively apply “template” code with some specialized logic inside (where the delegate provides the specialization)

delegate is a keyword that can be used to declare method references and inline methods. This inline code can be stored inside variables and then executed when necessary. This is exactly what happens when you bind methods to events: you are storing method references inside a variable that can hold several of them and call them when an event happens.
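Here is a minimal sketch of that idea (the Notify delegate type is made up for this example):

using System;

public class DelegateExample
{
    // A delegate type declares the signature that the variable will accept.
    public delegate void Notify(string message);

    public static void Main()
    {
        Notify notify = Console.WriteLine;                       // store a method reference
        notify += m => Console.WriteLine("Handled again: " + m); // multicast, just like events

        notify("Something happened");                            // both methods run here
    }
}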

Of course, it’s limiting to think about delegates only as events. If we check the standard definition for the word delegation:

In its original usage, delegation refers to one object relying upon another to provide a specified set of functionalities. […] Delegation is the simple yet powerful concept of handing a task over to another part of the program.

As I already demonstrated with the StreamProxy class, we can easily give another piece of software the tools to solve its own problem. But sometimes, a complete class might not be necessary. Just like when you are sending a data repository to a service class to save a model, a delegate basically lets you pass any method that matches the accepted signature instead of a complete class.
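A small hypothetical sketch of that last point, where the service takes only the one method it needs instead of a whole repository class:

using System;

public class Model
{
    public string Name { get; set; }
}

public class ModelService
{
    private readonly Action<Model> save;

    // Any method matching Action<Model> will do: repository.Save, a lambda, etc.
    public ModelService(Action<Model> save)
    {
        this.save = save;
    }

    public void Process(Model model)
    {
        // ...business logic would go here...
        save(model); // the delegate stands in for a full repository class
    }
}

public class Usage
{
    public static void Main()
    {
        ModelService service = new ModelService(m => Console.WriteLine("Saved " + m.Name));
        service.Process(new Model { Name = "Invoice" });
    }
}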

One of the most recent uses of lambdas in C# is inside mocking tools. Moq uses them to easily describe expectations, returned values, and so on. This allows Moq to be type-safe instead of relying on reflection and string comparison, which brings us compile-time checks rather than runtime checks.

There are a lot of uses for delegates and they are being used more and more. Lots of languages support some form of delegation (.NET, C++, Java, and many more).

I hope delegates are not as foreign to you as they were a year ago.

Do modern compilers optimize the x * 2 operation to x << 1?

I was wondering whether the C++ compiler inside Visual Studio 2008 was optimizing it the way it logically would be. So I asked the question on Stack Overflow. It was among my first questions, and it was also a way to see how the community would answer (yeah, I’m lazy).

I was promptly answered by Rob Walker. He showed me the compiler’s output.

32-bit:

01391000  push        ecx
    int x = 0;

    scanf("%d", &x);
01391001  lea         eax,[esp]
01391004  push        eax
01391005  push        offset string "%d" (13920F4h)
0139100A  mov         dword ptr [esp+8],0
01391012  call        dword ptr [__imp__scanf (13920A4h)]

    int y = x * 2;
01391018  mov         ecx,dword ptr [esp+8]
0139101C  lea         edx,[ecx+ecx]

64-bit:

    int y = x * 2;
000000013FB9101E  mov         edx,dword ptr [x]

    printf("%d", y);
000000013FB91022  lea         rcx,[string "%d" (13FB921B0h)]
000000013FB91029  add         edx,edx

While this gives us a lot of information about what it does, it’s normally a pretty bad idea to try to outsmart the compiler. The compiler has its own ways of optimizing some operations to make sure they run best on the targeted platform.

To quote Raymond Chen:

Of course, the compiler is free to recognize this and rewrite your multiplication or shift operation. In fact, it is very likely to do this, because x+x is more easily pairable than a multiplication or shift. Your shift or multiply-by-two is probably going to be rewritten as something closer to an add eax, eax instruction.

The moral of the story is to write what you mean. If you want to divide by two, then write “/2”, not “>>1”

Bold was added by me for emphasis. Raymond Chen wrote that in 2005. Write what you mean. Wow. If you want to multiply, then write “*2”, and if you want to divide, write “/2”. If you mean to shift bits, then do so! But don’t try to outsmart the compiler by doing bit shifting.

If you are doing bit shifting to gain performance, you are doing something wrong. There are many other ways to gain performance in the kind of applications we, developers, are building, and bit shifting is not one of them.

You are simply losing readability for maybe a few milliseconds.

Review and Summary of Greg Young's presentation

Greg gave a presentation here in Montreal for the “.NET Montreal Community” user group. I didn’t know Greg before his presentation. He was talking about “Everything you wanted to know about architecture but were afraid to ask”. I must say that this presentation was really revealing on some points. Greg really made the point that customers are the architect’s priority and that the job of the architect is to bring IT and Business together.

His presentation was energetic and funny (man… you should see those slides!), and when you left… you had learned something.

Here’s what I remember 24 hours after the presentation.

Context is everything.

Pushing a prototype directly to production might be a good idea if getting this product available means the company’s survival. It all depends on… the context! If the company is a startup and no money is being made, the faster you have a product, the faster money starts coming in. In this scenario, you have to make some sacrifices. Instead of configuring a BizTalk installation, you might do a simple in-code workflow that will help you get the first version out. It is not, however, a reason to keep the code crappy. This is technical debt and it will need to be paid back later. When the product/business starts making money, a proper BizTalk installation will have to be done, and rewriting the workflow part will probably be time-intensive.

Software is there to bring money or to save money

Which brings us to this. Money.

Software is not there to save reports to a database. It’s there to save money by automating report creation so that someone doesn’t have to spend 3 hours making a report (saving money here!). It’s there to MAKE money in the case where you are actually building a product or offering a service to potential clients. By keeping this in mind, we keep a better view of what the client really wants.

When in a BBoM (Big Ball of Mud), build bubbles

If you are in a situation where all the code is crap, start by building a bubble around what you are currently implementing so that other systems can’t break you. By bubble, we mean “clean code” that will do the work it is supposed to do, be testable, perform well, etc.

Of course the bubble is going to be pierced and demolished with time, but the longer people work on the system, the more bubbles you build, and in the end you finish with a nicely encapsulated system with less mud. There will still be mud. It will never go totally away. But way less than if you had thrown garbage code everywhere.

TTL (time to live) of a system is important

If the system is going to be rebuilt every 3 years, it’s worthless to build it like it’s going to last 25. A lot of money can be saved when you don’t have to write code that will last 25 years.

It doesn’t mean that QoS and other non-functional requirements are not necessary. They are as necessary as anywhere else. It just means that there won’t be an 8-month planning phase to build the best architecture possible so that it can overcome ANY changes the business might have. Instead, you reduce the amount of architecture and choose a simpler one that will be easier to maintain but less impervious to changes. Complexity in such a system will eventually increase to a point where a rewrite is necessary. But here is the trick… it’s a planned rewrite. Because of this, less time is spent on development and more time is spent on making money. Business loves that.

An architect must keep sight of the current context

Let’s take an example. You have been hired by a small retail business to build an e-commerce web site. The company is a start-up living off debt. Their website is scheduled to be released in a month, and they are expecting this money to pay off their debt and increase sales.

Of course, you could start by building a site that will suit their needs and will start making them money. Or you could build the Next-Generation-E-Commerce-Application-That-Will-Blow-Competitors-Away, which will take 2 years to complete.

The goal is to keep sight of what the client needs. The client wants to make money. It’s boring. It’s more fun to build something that is incredibly powerful and that will be a real technical pleasure to build. But if you chose #2, you just lost sight of your context.

The client needs money, but you are lost building “the next big thing”.

Conclusion

Context is king for an architect. Craftsmanship must take second place when making money (thus paying YOU) is the highest priority. It’s not, however, a reason to slack off and write crap code. The better our code, the easier it is to maintain and to add new functionality. There are just times when prototypes are going to be shipped to production because it’s essential to the business, and you are going to maintain them. The main goal is to be aware when we incur technical debt and to be able to pay it back as soon as possible to keep the client happy.

The presentation Greg gave was very interesting and I would really love to see another one by him (DevTeach 2009? Montreal User Group?). If I got anything wrong here, please tell me. I would love to discuss it with more people!

Re: Do We Need a New Internet?

That was the question asked by the New York Times in this article.

If you are just in for the short answer and want to leave quickly… here it is. No.

What I really liked about this article is how it describes the “growing belief among engineers and security experts that Internet security and privacy have become so maddeningly elusive that the only way to fix the problem is to start over”.

Start over? What happens when you start over? Ask Netscape, dBase, and many others. Joel Spolsky wrote about this on his blog in April 2000. Close to 9 years ago, and I still agree with him. But those are private companies that failed because they were in competition with other companies. That can’t be true for the Internet, right? The current network is now spread to pretty much every country in the world. The United States is still among the leaders and one of the ones bringing bright ideas/technology to improve what we already have. If America were to drop out of the Internet and start its own thing, 2 things would happen.

First, the network would stay alive (how many countries can just start over?). And America would seriously lag behind… like Netscape, dBase, etc. Sounds familiar? Yep. That’s it… starting from scratch.

What I really disliked in the article is that “[…] users would give up their anonymity and certain freedoms in return for safety”. Americans already did this and nobody is really any safer. I’ll let my good friend Ben take over for this one: “He who sacrifices freedom for security deserves neither.”

So… no. There won’t be any new “Internet”. We must focus on what we already have. As Joel Spolsky wrote in that April 2000 article, “It’s harder to read code than to write it”. Taken to the IT side, it means that it’s harder to understand how everything works than to just start over from scratch. Starting from scratch is easy because you have a green field, but just like the current network, it would very quickly end up close to what we have today. If it’s faster, people will find bigger things to transfer. If it’s safer, people will find bugs in the software somewhere and exploit them. And the freedom you gave up… that will always be gone.

So stop complaining. Fight for Net Neutrality, but please stop believing that starting anew will solve all your problems. Most of the time… it only brings other problems that you haven’t seen before, and nothing really changes.

Unit testing internal members of a solution from another project

Here is a little bit of knowledge that lots of people are not aware of. There is an attribute called InternalsVisibleToAttribute that grants a specific external project (the unit test project) access to internal members.

This attribute, once inserted inside AssemblyInfo.cs, will grant “public” visibility for all internal members of the project to the project specified within the attribute.

Here is how it is shown on MSDN:

[assembly:InternalsVisibleTo("MyAssembly, PublicKey=32ab4ba45e0a69a1")]

It is however wrong and will never work. The main reason is that the sample leads us to believe the value is the PublicKeyToken, when the attribute in fact requires the full PublicKey, as the parameter name clearly says.

So… how do we get this PublicKey? By executing the following command: sn -Tp MyAssembly.dll

The result is going to be something like the following:

Public key is
0024000004800000940000000602000000240000525341310004000001000100adfedd2329a0f8
3e057f0b14e47f02ec865e542c2dcca6349177fe3530edd5080276c48c6d02fa0a6f67738cc1a0
793be3322cf17b8995acc15055c00fa61b67a203c7eb2516922810ff0b17cd2e08492bdcafc4a9
23e6fff4caba672a4c2d0d0f5cac9aea95c3dce3717bb733d852c387f5f025c42c14ec8d759f7e
b13689be
Public key token is 96dfc321948ee54c

Here is the end result to make it properly visible:

[assembly: InternalsVisibleTo("AssemblyB, PublicKey="
+ "0024000004800000940000000602000000240000525341310004000001000100adfedd2329a0f8"
+ "3e057f0b14e47f02ec865e542c2dcca6349177fe3530edd5080276c48c6d02fa0a6f67738cc1a0"
+ "793be3322cf17b8995acc15055c00fa61b67a203c7eb2516922810ff0b17cd2e08492bdcafc4a9"
+ "23e6fff4caba672a4c2d0d0f5cac9aea95c3dce3717bb733d852c387f5f025c42c14ec8d759f7e"
+ "b13689be" )]

After this step is done, all references to internal members are considered “public” for this specific project. This simple trick allows you to complete your tests and leaves no excuse not to test.

Problem deploying a solution package on a SharePoint 2007 farm?

You have a WSP, you are trying to deploy it to the farm, and it doesn’t work. You hate it. You look in the Event Viewer, the SharePoint logs, etc.

Before you even start looking around everywhere, be aware that adding/deploying a solution package requires specific rights.

Here is a small checklist:

  • DBO access to the configuration database (its name normally ends with “_Config”)
  • Farm administrator of the SharePoint site

But you are probably wondering why it was working on the development machine and not in production.

Why am I not DBO?

Normally when a development machine is created, the developers are administrators on the machine. Most SharePoint/WSS installations are made on a SQL Server 2005 installation. When installing SharePoint on SQL Server 2005, all administrators of the machine where the database is installed are DBO. However, in a production environment, developers are given more restricted access and often don’t have DBO access on the database.

Why am I not Farm administrator?

As for the Farm administrator situation, the user that configures SharePoint on the development machine is automatically given Farm Administrators rights, which is not the case in a production environment.

Conclusion

It is really important to know which minimum permissions are required to do certain tasks inside SharePoint 2007. This is a specific case where only trusted users are allowed to make such system-wide changes. SharePoint 2007 is configured to be “secure by default” and restricted to disallow any unauthorized user from making changes that could compromise the farm.

Enjoy your SharePoint deployments!

Easily enable databindings on a ToolStripButton

I was developing an application lately and I needed to bind the “Enabled” property of a ToolStripButton to my presenter. I failed to find any “DataSource” or “DataBindings” property. I then decided to make my own button, without reinventing the wheel, to enable this capability.

Here’s this simple class:

public class ToolStripBindableButton : ToolStripButton, IBindableComponent
{
    private ControlBindingsCollection dataBindings;

    private BindingContext bindingContext;

    public ControlBindingsCollection DataBindings
    {
        get
        {
            if (dataBindings == null) dataBindings = new ControlBindingsCollection(this);
            return dataBindings;
        }
    }

    public BindingContext BindingContext
    {
        get
        {
            if (bindingContext == null) bindingContext = new BindingContext();
            return bindingContext;
        }
        set { bindingContext = value; }
    }
}

Once you include this simple class inside your project/solution… you can easily convert any ToolStripButton into our new ToolStripBindableButton.

And I solved my problem like this:

myBindableButton.DataBindings.Add("Enabled", myPresenter, "CanDoSomething");

Part 2 - Basics of mocking with Moq

See also:  Part 1 - Part 3

As with every mocking framework (except TypeMock, which works differently), a mocked class can’t be sealed and the methods that need to be mocked must be public. If the class does not implement an interface, the methods being mocked must also be virtual.
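To illustrate (my own sketch, not from the original post), here is what is and isn’t mockable when no interface is involved:

// Mockable: the class is not sealed and the method is public and virtual,
// so Moq can derive a proxy and override GetTax.
public class VirtualTaxCalculator
{
    public virtual decimal GetTax(decimal rawPrice)
    {
        return rawPrice * 0.05M; // arbitrary rate, for the example only
    }
}

// NOT mockable: the class is sealed, so no proxy can be derived from it.
public sealed class SealedTaxCalculator
{
    public decimal GetTax(decimal rawPrice)
    {
        return rawPrice * 0.05M;
    }
}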

Once this is cleared up… let’s show a simple example of a Product having its price calculated with a tax calculator.

Here’s what we are starting with:

public class Product
{
    public int ID { get; set; }
    public String Name { get; set; }
    public decimal RawPrice { get; set; }

    public decimal GetPriceWithTax(ITaxCalculator calculator)
    {
        return calculator.GetTax(RawPrice) + RawPrice;
    }
}

public interface ITaxCalculator
{
    decimal GetTax(decimal rawPrice);
}

The method we want to test here is Product.GetPriceWithTax(ITaxCalculator). At the same time, we don’t want to instantiate a real tax calculator which gets its data from a configuration file or a database. Unit tests should never depend upon your application’s configuration or a database. By “application’s configuration”, I mean “App.config” or “web.config”, which are often changed during the life of an application and might inadvertently fail your tests.

So, we are going to simply mock our tax calculator like this:

// Initialize our product
Product myProduct = new Product { ID = 1, Name = "Simple Product", RawPrice = 25.0M };

// Create a mock with Moq
Mock<ITaxCalculator> fakeTaxCalculator = new Mock<ITaxCalculator>();

// Make sure to return 5$ of tax for a 25$ product
fakeTaxCalculator.Expect(tax => tax.GetTax(25.0M)).Returns(5.0M);

Now it all depends on what you want to test. Depending on whether you lean toward “state” (classicist) or “behaviour” verification (mockist), you will want to test different things. If you don’t know the difference, don’t worry about it now, but you might want to look at this article by Martin Fowler.

So if we want to make sure that “GetTax” from our interface was called:

// Retrieve the calculated price with tax
decimal priceWithTax = myProduct.GetPriceWithTax(fakeTaxCalculator.Object);

// Verify that the "GetTax" method of the interface was called
fakeTaxCalculator.Verify(tax => tax.GetTax(25.0M));

If you want to make sure that the calculated price equals your product price with the tax added (which confirms that the taxes were calculated):

// Retrieve the calculated price with tax
decimal priceWithTax = myProduct.GetPriceWithTax(fakeTaxCalculator.Object);

// Make sure that the taxes were calculated (expected value first)
Assert.AreEqual(30.0M, priceWithTax);

What’s the difference? The first example verifies the behaviour by making sure that “GetTax” was called. It doesn’t care about the value returned. It could return 100$ and it wouldn’t care. All that matters in this example is that GetTax was called. Once this is done, we can assume that the expected behaviour was confirmed.

The second example is a state verification. We throw 25$ at the tax calculator and we expect it to return 5$ of tax, for a total price of 30$. The SUT could skip calling GetTax entirely and the test wouldn’t care. As long as the proper value is returned, it’s valid.

Some people will argue that behaviour is better than state (or vice versa). Personally, I’m a fan of both. A good example is that I might want to verify that an invalid invoice will not be persisted to the database, and a behaviour verification approach is perfect for this case. But if I’m verifying (like in this case) that the taxes were properly calculated, state verification is more often than not quicker and easier to understand.

Nothing prevents you from doing both and making sure that everything works. I’m still not a full-fledged TDD developer, but I’m trying as much as possible to write tests for my classes as often as possible.

If you found this article helpful, please leave a comment! They will be most helpful for my presentation on February 25th 2009 at www.dotnetmontreal.com.

Top tips to increase your productivity

In fact… there is only one.

Stop browsing. Start working.

Nothing makes better sense than that. We are constantly surrounded by information of every kind, from all kinds of sources. Whether it’s “normal news”, Microsoft announcements, nice pieces of code on DZone or DotNetKicks, or spending “a few minutes” on reddit, the most important thing is… stop it.

Keep those for home browsing.

You should see your productivity skyrocket immediately.

Don’t wait until someone tells you.

Part 1 - Introduction to Moq

See also:  Part 2 - Part 3

This is the first post of a series on mocking with Moq. I’ll be giving a talk at the .NET Montreal Community on February 25th, and I thought it would be a good reference for anyone attending the @Lunch event.

What is Moq?

Moq is a mocking framework, like Rhino Mocks and TypeMock, jointly developed by Clarius, Manas and InSTEDD. It heavily uses lambdas to create expectations and return results. It’s been highly criticized for not making any distinction between mocks and stubs.

What is important to remember is that, unless you are philosophically attached to your testing style… most developers don’t make any distinction between them and simply do behaviour testing.

Moq easily allows you to change its behaviour from “Strict” to “Loose” (Loose being the default). Strict behaviour won’t allow any call on the mocked object unless it has previously been marked as expected. Loose will allow all calls and return a default value for them.
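A minimal sketch of the difference, reusing the ITaxCalculator interface from Part 2 (Moq 2.x-era Expect syntax assumed; this would live inside a test method):

// Loose (the default): any call is allowed and returns a default value.
Mock<ITaxCalculator> loose = new Mock<ITaxCalculator>();
decimal defaultTax = loose.Object.GetTax(10.0M);  // returns 0.0M, no setup needed

// Strict: every call must be expected beforehand, or the mock throws.
Mock<ITaxCalculator> strict = new Mock<ITaxCalculator>(MockBehavior.Strict);
// strict.Object.GetTax(10.0M) would throw here because no expectation was set.
strict.Expect(tax => tax.GetTax(10.0M)).Returns(1.5M);
decimal tax = strict.Object.GetTax(10.0M);        // now allowed, returns 1.5M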

There are a lot of more advanced behaviours that can be configured and used.

Why another mocking framework?

Daniel Cazzulino (a.k.a. Kzu) blogged a lot about Moq and even explained why he helped create it. Moq was created to ease the learning curve of picking up a mocking framework, while blurring the distinction between mocks and stubs.

Moq allows you to get into mocking quickly (a good thing) while still allowing more complex scenarios for more purist mockists. It’s the perfect mocking framework if you have never touched one before and this is your first experience.

Where do I download it?

You can download Moq directly here. At the moment of writing this post, Moq was at version 2.6.1014 and Moq 3.0 was available as a beta.

How do I install it?

Once Moq is downloaded and extracted from its zip file, you can easily add the single DLL (Moq.dll) to your project, or install it in the GAC if you are going to use it in many projects.

Stay up to date

After this brief introduction, I’ll show more advanced features of Moq, with code samples showing how and why to use them.

Code Snippet - Quickly changing the host in a URL in C#

Quick code snippet today. Let’s say you have some URLs on which you want to change the hostname/port without having to do string manipulation.

// Multiple constructors are available
UriBuilder builder = new UriBuilder("http://localhost:1712")
{
    Host = Environment.MachineName,
    Port = 80
};
Uri myNewUri = builder.Uri;

There you go! It’s as easy as that. This will avoid countless hours of parsing a URL for the hostname, port number or whatever else you are searching for.

The bonus with that is that the path of the URL won’t be touched at all.

Have fun!

Stop using those stupid model examples

Stop using Circle/Square/Rectangle, People/Employee or Car/Model examples for models, as examples of how to use object-oriented principles, or as any example at all.

There are plenty of “open” models that you can use. Here’s a simple list for those who need inspiration:

  • Blog (Posts, Comments, Authors, etc…)
  • E-Commerce (Invoice, Order, Customer, Warehouse, Inventory, etc…)
  • Auction (Auction, Seller, Buyer, Reputation, etc….)
  • Bank (Account, Transactions, Customer, etc…)
  • News site (Article, User, Approver, etc…)
  • And so many more

Unless you are explaining what OOP is to a total beginner who never did any of this, you should use more advanced models to explain practices, design patterns or anything else. Otherwise, we’ll keep on babbling about stupid models of how a Square is a Rectangle and so on.

The time has come to stop using 1st-grade models to explain advanced concepts. Most people should be able to easily pick one of the models I’ve shown above and use one element of it to make the concept accessible to everyone.

Who’s with me?

Build fails only on the build server with delay-signed assemblies

[Any CPU/Release] SGEN(0,0): error : Could not load file or assembly '<YOUR ASSEMBLY>' or one of its dependencies. Strong name validation failed. (Exception from HRESULT: 0x8013141A)

I got that error from our build machine. It crashed wonderfully and told me that an assembly could not be loaded.

After 1 hour of searching, I finally found the problem.

We are in an environment where we delay-sign all of our assemblies and fully sign them on the build server as an “AfterDrop” event. Of course, we add a “Skip Verification” entry for the public key token we are using so that we can put the assemblies inside the GAC.

All of our projects (more than 20) were built exactly that way, and I could see no reason why this would happen. I then decided to look at what depended on this assembly: one project, and it was the one failing inside the “Build Status”.

I then found something used inside that specific project that was never used anywhere else. Somebody had used the “Settings” tab to store application settings. Not a bad choice and a perfectly sound decision but… how can this make the build crash?

Well… it seems that using “Settings” forces a call to SGEN.exe, and SGEN.exe won’t take any partially signed assemblies. That’s when I figured out that our build server didn’t have any of those “Skip Verification” entries.

After searching Google for some different terms, I found, under the “Build” tab of the project properties, a way to deactivate the call to SGEN. It’s called “Generate Serialization Assembly” (the GenerateSerializationAssemblies property in the project file). By default, the value is “Auto”. After setting the value to “Off” for “Release” mode only, the build was fixed and we were happy campers.

Code Snippet: Filtering a list using Lambda without a loop

This is not the latest news of the day… but if you are doing filtering with loops, you are doing it wrong.

Here’s a code sample that will easily replace a loop:

List<String> myList = new List<string> {"Maxim", "Joe", "Obama"};
myList = myList.Where(item => item.Equals("Obama")).ToList();

While this might sound too easy, it works fine. Just make sure that the objects you are using inside this kind of filtering are not ActiveRecord DAL objects, or you might be sorry about the performance.

WSS/SharePoint Installation - Stand-Alone vs Web Front-End

A small post about that today. I ended up losing a good part of my day reinstalling SharePoint on a VMware machine. Why?

Because at first I installed it in Stand-Alone mode, which sounded like a good idea at the time. What that does is create a SQL Server Express database that won’t allow anyone to connect to it. Adding insult to injury, it also doesn’t work with all the normal components of a SharePoint installation.

So… after reverting to a working snapshot… redoing all my Windows Updates plus reinstalling SharePoint… I ended up with a working installation.

So… as a small reminder… don’t forget to specify “Web Front-End” when first prompted by the installation. It will save you a lot of time.

Cross-Cutting Concerns should be handled on ALL projects. No Excuses

The title says it all. All cross-cutting concerns in a project should be handled, or at least given some thought, on ALL PROJECTS. No exceptions. No excuses.

Before we go further, what is a cross-cutting concern? Here is the definition from Wikipedia:

In computer science, cross-cutting concerns are aspects of a program which affect (crosscut) other concerns. These concerns often cannot be cleanly decomposed from the rest of the system in both the design and implementation, and result in either scattering or tangling of the program, or both.

The perfect example of this is error handling. Error handling is not part of the main model of an application, but it is required for developers to catch errors and log/display them. Logging is also a cross-cutting concern.

So let’s go over the 3 most important concerns:

  • Exception Management
  • Logging/Instrumentation
  • Caching

Exception Management

This is the most important one. It really seems like a basic thing: wrapping code in try {…} catch {…} and making sure everything works is the most elementary thing to do. Yet I’ve seen projects without it. Honestly… it’s bad. Really bad. Everything was working fine, but when something went wrong, nothing could handle it properly.

Adding exception handling to each and every method in an application is not reasonable either.

Here is a small checklist for handling exceptions:

  1. Don’t catch an exception if there is no relevant information that can be added.
  2. Don’t swallow (empty catch) if there is not a good reason to.
  3. Make sure that exception are managed at the contact point between layers so relevant information can be added

Worst of all, in that project… you couldn’t know whether an error was coming from the Presentation, Business or Data layer, which leads to horrible debugging.
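Here is a minimal sketch (all names are hypothetical) of rule 3: catching at the boundary between layers, adding relevant information, and keeping the original exception:

using System;
using System.Data;

public class Customer
{
    public int ID { get; set; }
}

public interface ICustomerRepository
{
    Customer FindById(int id);
}

// The business layer adds context at its boundary with the data layer.
public class CustomerService
{
    private readonly ICustomerRepository repository;

    public CustomerService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public Customer GetCustomer(int id)
    {
        try
        {
            return repository.FindById(id); // data-layer call
        }
        catch (DataException ex)
        {
            // Relevant information is added, and the original exception
            // is preserved as the InnerException (rules 1 and 3).
            throw new ApplicationException("Could not load customer " + id, ex);
        }
        // No catch-all: if we have nothing relevant to add, let it bubble up.
    }
}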

Which brings us to the next section….

Logging / Instrumentation

When an exception is thrown, you want to know why. Logging allows you to log everything at a specific location chosen for the project you are on (database, flat file, WMI, Event Log, etc.). Most people already log a lot of stuff in their code but… is what’s logged relevant?

Logging is only important when it’s meaningful. Bloating your software with logging won’t do any good. Too much logging and nobody will inspect the log for things that went wrong. Too little and the software could generate errors for too long before anybody realizes.

I won’t go into too much detail, but if you want to know about code-to-logging ratios and the problems with logging, there is plenty of information out there.

Caching

Caching is too often considered the root of all evil. However, it is evil only if the code becomes unreadable or it takes a developer 4 hours to get a 1% gain.

I once coded a part of an application that generated XML for consumption by a Flash application. I didn’t have any specifications, but I knew that if I left that uncached, I would have a bug report on my desk the next day. The caching I added helped keep the Flash application responsive while keeping the server load under control.

Caching is too often pushed back to a later time. It should be considered every time a service or any dynamically generated content is served to the public. Responsiveness will grow while the amount of code stays at what is necessary.

If you need more arguments, please take a look at ASP.NET Micro Caching: Benefits of a One-Second Cache.
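For flavour, here is a minimal sketch of one-second micro caching for dynamically generated XML, assuming an ASP.NET context (the cache key and builder method are made up for the example):

using System;
using System.Web;
using System.Web.Caching;

public static class FlashXmlFeed
{
    public static string GetXml()
    {
        Cache cache = HttpRuntime.Cache;
        string xml = cache["FlashXml"] as string;
        if (xml == null)
        {
            xml = BuildExpensiveXml(); // hits the database, walks objects, etc.

            // One-second absolute expiration: even under heavy load,
            // the expensive generation runs at most once per second.
            cache.Insert("FlashXml", xml, null,
                         DateTime.Now.AddSeconds(1),
                         Cache.NoSlidingExpiration);
        }
        return xml;
    }

    private static string BuildExpensiveXml()
    {
        return "<data />"; // placeholder for the real generation
    }
}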

How do I do it?

If you haven’t been around for the last 10 years or so, I would suggest taking a look at Enterprise Library. It’s a great tool that lets you handle all those cross-cutting concerns without having to build your own tools.

If you don’t want to use Enterprise Library, there are plenty of other frameworks that will let you handle those concerns.

Just remember: good coders code, great coders reuse.

Creating a StreamProxy with Delegate/Lambda to read/write to a file

I recently saw a question on Stack Overflow. The accepted answer, from Jon Skeet, was:

  • Event handlers (for GUI and more)
  • Starting threads
  • Callbacks (e.g. for async APIs)
  • LINQ and similar (List.Find etc)
  • Anywhere else where I want to effectively apply “template” code with some specialized logic inside (where the delegate provides the specialization)

I was once asked, “What is the use of a delegate?”. The main answer that I found was “to delay the call”. Most people see delegates as events, most of the time. However, they can be put to much greater use. Here is an example that I’ll gladly share with you all:

public class StreamProxy<T> where T : Stream
{
    private Func<T> constructorFunction;

    private StreamProxy(Func<T> constructor)
    {
        constructorFunction = constructor;
    }

    public void Write(Action<StreamWriter> func)
    {
        using (T stream = constructorFunction())
        {
            StreamWriter streamWriter = new StreamWriter(stream);
            func(streamWriter);
            streamWriter.Flush();
        }
    }

    public String Read(Func<StreamReader, String> func)
    {
        using (T stream = constructorFunction())
        {
            string result = func(new StreamReader(stream));
            return result;
        }
    }

    public static StreamProxy<T> Create(Func<T> func)
    {
        return new StreamProxy<T>(func);
    }
}

To summarize what it does… it accepts a delegate that returns a class deriving from “Stream”, which will be used as a constructor. It gives you back a StreamProxy object that you can then use to read or write strings through that stream. What is interesting is that when the proxy is first created… nothing is done to the file. You are just giving the class instructions on how to access it. When you then read/write the file, the class knows how to manage the stream and makes sure that no locks are left on the file.

Here is a sample usage of that class:

// Here I use a FileStream but it can also be a MemoryStream or anything else that derives from Stream
StreamProxy<FileStream> proxy = StreamProxy<FileStream>.Create(() => new FileStream(@"C:\MyTest.txt", FileMode.OpenOrCreate));

// Writing to the file
proxy.Write(stream => stream.WriteLine("I am using the Stream Proxy!"));

// Reading from the file
string contentOfFile = proxy.Read(stream => stream.ReadToEnd());

That’s all folks! As long as you can give a Stream to this proxy, you won’t need any “using” blocks in your code and everything will stay clean!

See you all next time!