Review and Summary of Greg Young's presentation

Greg gave a presentation here in Montreal for the “.NET Montreal Community” user group. I didn’t know Greg before his presentation. He was talking about “Everything you wanted to know about architecture but were afraid to ask”. I must say that this presentation was really revealing on some points. Greg really made the point that customers are the architect’s priority and that the job of the architect is to bring IT and business together.

His presentation was energetic, funny (man… you should see those slides!) and when you left… you learned something.

Here’s what I remember 24 hours after the presentation.

Context is everything.

Pushing a prototype directly to production might be a good idea if getting the product out means the company’s survival. It all depends on… the context! If the company is a startup and no money is being made, the faster you have a product, the faster money starts coming in. In this scenario, you have to make some sacrifices. Instead of configuring a BizTalk installation, you might do a simple in-code workflow that will help you get the first version out. That is not, however, a reason to keep the code crappy. This is technical debt and it will need to be paid back later. When the product/business starts making money, a proper BizTalk installation will have to be done, and rewriting the workflow part will probably be time-intensive.

Software is there to bring money or to save money

Which brings us to this: money.

Software is not there to save reports to a database. It’s there to save money by automating report creation so that someone doesn’t have to spend 3 hours building a report (saving money here!). It’s there to MAKE money when you are actually building a product or offering a service to potential clients. By keeping this in mind, we keep a better view of what the client really wants.

When in a BBoM (Big Ball of Mud), build bubbles

If you are in a situation where all the code is crap, start by building a bubble around what you are currently implementing so that other systems can’t break you. By bubble, we mean “clean code” that will do the work it is supposed to do, be testable, perform well, etc.

Of course, bubbles are going to get pierced and demolished with time, but the longer people work on the system, the more bubbles you build, and in the end you finish with a nicely encapsulated system with less mud. There will still be mud. It will never go away completely. But there will be way less than if you had kept throwing garbage code everywhere.

TTL (time to live) of a system is important

If the system is going to be rebuilt every 3 years, it’s pointless to build it like it’s going to last 25. A lot of money can be saved when you don’t have to write code that will last 25 years.

It doesn’t mean that QoS and other non-functional requirements are unnecessary. They are as necessary as anywhere else. It just means that there won’t be an 8-month planning phase to build the best architecture possible so that it can overcome ANY change the business might throw at it. Instead, you reduce the amount of architecture and choose a simpler design that is easier to maintain but less impervious to change. Complexity in such a system will eventually increase up to a point where a rewrite is necessary. But here is the trick… a planned rewrite. Because of this, less time is spent on development and more time is spent on making money. Business loves that.

Architect must keep sight of the current context

Let’s take an example. You have been hired by a small retail business to build an e-commerce web site. The company is a start-up living off debt. Their website is scheduled to be released in a month, and they are expecting this money to pay off their debt and increase sales.

Of course, you could start by building a site that will suit their needs and start making them money. Or you could build the Next-Generation-E-Commerce-Application-That-Will-Blow-Competitors-Away… which will take 2 years to complete.

The goal is to keep sight of what the client needs. The client wants to make money. It’s boring. It’s more fun to build something incredibly powerful that will be a real technical pleasure to build. But if you chose option #2, you just lost sight of your context.

The client needs money, but you are lost building “the next big thing”.

Conclusion

Context is king for an architect. Craftsmanship must take second place when making money (and thus paying YOU) is the highest priority. That is not, however, a reason to slack off and write crap code. The better our code, the easier it is to maintain and the easier it is to add new functionality. There are just times when a prototype gets shipped to production because it’s essential to the business, and you are going to have to maintain it. The main goal is to be aware when we take on technical debt and to pay it back as soon as possible to keep the client happy.

The presentation Greg gave was very interesting and I would really love to see another one by him (DevTeach 2009? Montreal User Group?). If I got anything wrong here, please tell me. I would love to discuss it with more people!

Re: Do We Need a New Internet?

That was the question asked by the New York Times in this article.

If you came for the short answer and want to leave quickly… here it is. No.

What I really liked about this article is the observation that there is a “growing belief among engineers and security experts that Internet security and privacy have become so maddeningly elusive that the only way to fix the problem is to start over”.

Start over? What happens when you start over? Ask Netscape, dBase, and many others. Joel Spolsky wrote about this on his blog in April 2000. That was close to 9 years ago and I still agree with him. But those are private companies that failed because they were in competition with other companies. That can’t be true for the Internet, right? The current network has now spread to pretty much every country in the world. The United States of America is still among the leaders and one of the countries bringing bright ideas and technology to improve what we already have. If America were to drop out of the Internet and start its own thing, 2 things would happen.

First, the network would stay alive (how many countries can just start over?). And America would seriously lag behind… like Netscape, dBase, etc. Sound familiar? Yep. That’s it… starting from scratch.

What I really disliked in the article is the idea that “[…] users would give up their anonymity and certain freedoms in return for safety”. Americans already did this, and nobody is really any safer. I’ll let my good friend Ben take over for this one: “He who sacrifices freedom for security deserves neither.”

So… no. There won’t be any new “Internet”. We must focus on what we already have. As Joel Spolsky wrote in that April 2000 article, “It’s harder to read code than to write it”. Taken to the IT side, it means that it’s harder to understand how everything works than to just throw it away and start from scratch. Starting from scratch is easy because you have a green field, but just like the current network, it would quickly end up close to what we have today. If it’s faster, people will find bigger things to transfer. If it’s safer, people will find bugs in the software somewhere and exploit them. And the freedom you gave up… that will be gone for good.

So stop complaining. Fight for Net Neutrality, but please stop believing that starting anew will solve all your problems. Most of the time… it only brings other problems that you hadn’t seen before, and nothing really changes.

Unit testing internal members of a project from another project

Here is a little bit of knowledge that lots of people are not aware of. There is an attribute called InternalsVisibleToAttribute that grants a specific external project (the unit test project) access to your internal members.

This attribute, once inserted inside AssemblyInfo.cs, will grant “public” visibility over all internal members of the project to the assembly specified within the attribute.

Here is how it is shown on MSDN:

[assembly:InternalsVisibleTo("MyAssembly, PublicKey=32ab4ba45e0a69a1")]

This example, however, is wrong and will never work. The main reason is that the value shown leads us to believe it’s the PublicKeyToken, when the attribute in fact expects the full PublicKey, exactly as the parameter name says.

So… how do we get this PublicKey? By executing the following command: sn -Tp MyAssembly.dll

The result is going to be something like the following:

Public key is
0024000004800000940000000602000000240000525341310004000001000100adfedd2329a0f8
3e057f0b14e47f02ec865e542c2dcca6349177fe3530edd5080276c48c6d02fa0a6f67738cc1a0
793be3322cf17b8995acc15055c00fa61b67a203c7eb2516922810ff0b17cd2e08492bdcafc4a9
23e6fff4caba672a4c2d0d0f5cac9aea95c3dce3717bb733d852c387f5f025c42c14ec8d759f7e
b13689be
Public key token is 96dfc321948ee54c

Here is the end result to make it properly visible:

[assembly: InternalsVisibleTo("AssemblyB, PublicKey="
+ "0024000004800000940000000602000000240000525341310004000001000100adfedd2329a0f8"
+ "3e057f0b14e47f02ec865e542c2dcca6349177fe3530edd5080276c48c6d02fa0a6f67738cc1a0"
+ "793be3322cf17b8995acc15055c00fa61b67a203c7eb2516922810ff0b17cd2e08492bdcafc4a9"
+ "23e6fff4caba672a4c2d0d0f5cac9aea95c3dce3717bb733d852c387f5f025c42c14ec8d759f7e"
+ "b13689be" )]

After this step is done, all references to internal members are considered “public” for that specific project. This simple trick lets you complete your tests and gives you no excuse not to test.
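To make this concrete, here is a minimal sketch, assuming the attribute above lives in the production assembly and that “AssemblyB” is the test project (the class, method and test names are made up for the example):

// In the production assembly: an internal class we want to test
internal class PriceCalculator
{
    internal decimal ApplyDiscount(decimal price)
    {
        return price * 0.9M;
    }
}

// In AssemblyB, the test project, the internal class is now directly usable:
[TestMethod]
public void ApplyDiscount_TakesTenPercentOff()
{
    PriceCalculator calculator = new PriceCalculator();
    Assert.AreEqual(22.5M, calculator.ApplyDiscount(25.0M));
}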

Problem deploying a solution package on a SharePoint 2007 farm?

You have a WSP, you are trying to deploy it to the farm, and it doesn’t work. You hate it. You try looking in the Event Viewer, the SharePoint logs, etc.

Before you even start looking around everywhere, be aware that adding/deploying a solution package requires some specific rights.

Here is a small checklist:

  • DBO access to the configuration database (its name normally ends with “_Config”)
  • Farm administrator of the SharePoint site
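
For reference, adding and deploying the package is typically done with stsadm; a quick sketch (the package name is a placeholder):

stsadm -o addsolution -filename MyPackage.wsp
stsadm -o deploysolution -name MyPackage.wsp -immediate -allowgacdeployment

If you lack the rights above, these are the operations that will fail.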

But you are probably wondering why it was working on the development machine and not in production?

Why am I not DBO?

Normally, when a development machine is created, the developers are administrators on the machine. Most SharePoint/WSS installations are made on top of SQL Server 2005. When installing SharePoint on SQL Server 2005, all administrators of the machine where the database is installed are DBO. However, in a production environment, developers are given more restricted access and often don’t have DBO access on the database.

Why am I not Farm administrator?

As for the Farm administrator situation, the user that configures SharePoint on the development machine is automatically given Farm Administrators rights, which is not the case in a production environment.

Conclusion

It is really important to know which minimum permissions are required to do certain tasks inside SharePoint 2007. This is a specific case where only trusted users are allowed to make such system-wide changes. SharePoint 2007 is configured to be “secure by default” and restricted to prevent any unauthorized user from making changes that could compromise the farm.

Enjoy your SharePoint deployments!

Easily enable databindings on a ToolStripButton

I was developing an application lately and I needed to bind the “Enabled” property of a ToolStripButton to my presenter. I failed to find any “DataSource” or “DataBindings” property on it. So I decided to make my own button, without reinventing the wheel, to enable this capability.

Here’s this simple class:

using System.Windows.Forms;

// A ToolStripButton that supports databinding by implementing IBindableComponent
public class ToolStripBindableButton : ToolStripButton, IBindableComponent
{
    private ControlBindingsCollection dataBindings;
    private BindingContext bindingContext;

    public ControlBindingsCollection DataBindings
    {
        get
        {
            // Lazily create the bindings collection on first access
            if (dataBindings == null) dataBindings = new ControlBindingsCollection(this);
            return dataBindings;
        }
    }

    public BindingContext BindingContext
    {
        get
        {
            if (bindingContext == null) bindingContext = new BindingContext();
            return bindingContext;
        }
        set { bindingContext = value; }
    }
}

Once you include this simple class in your project/solution, you can easily swap any ToolStripButton for our new ToolStripBindableButton.

And I solved my problem like this:

myBindableButton.DataBindings.Add("Enabled", myPresenter, "CanDoSomething");
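
For the binding to refresh the button when the presenter changes, the presenter has to raise change notifications. Here is a minimal sketch of such a presenter, assuming the property name used above (the rest of the class is up to you):

using System.ComponentModel;

public class MyPresenter : INotifyPropertyChanged
{
    private bool canDoSomething;

    public event PropertyChangedEventHandler PropertyChanged;

    public bool CanDoSomething
    {
        get { return canDoSomething; }
        set
        {
            canDoSomething = value;
            // Tell bound controls (like our button) that the value changed
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("CanDoSomething"));
        }
    }
}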

Part 2 - Basics of mocking with Moq

See also:  Part 1 - Part 3

As with every mocking framework (except TypeMock, which works differently), a mocked class can’t be sealed and the methods that need to be mocked must be public. If the class doesn’t implement an interface, the methods being mocked must also be virtual.

Once this is clear… let’s show a simple example of a Product having its price calculated with a tax calculator.

Here’s what we are starting with:

public class Product
{
    public int ID { get; set; }
    public String Name { get; set; }
    public decimal RawPrice { get; set; }

    public decimal GetPriceWithTax(ITaxCalculator calculator)
    {
        return calculator.GetTax(RawPrice) + RawPrice;
    }
}

public interface ITaxCalculator
{
    decimal GetTax(decimal rawPrice);
}

The method we want to test here is Product.GetPriceWithTax(ITaxCalculator). At the same time, we don’t want to instantiate a real tax calculator that gets its data from a configuration file or a database. Unit tests should never depend on your application’s configuration or on a database. By “application’s configuration”, I mean “App.config” or “web.config”, which are often changed during the life of an application and might inadvertently fail your tests.

So, we are going to simply mock our tax calculator like this:

// Initialize our product
Product myProduct = new Product { ID = 1, Name = "Simple Product", RawPrice = 25.0M };

// Create a mock with Moq
Mock<ITaxCalculator> fakeTaxCalculator = new Mock<ITaxCalculator>();

// Make sure to return 5$ of tax for a 25$ product
fakeTaxCalculator.Expect(tax => tax.GetTax(25.0M)).Returns(5.0M);

Now it all depends on what you want to test. Depending on whether you do “state verification” (classicist) or “behaviour verification” (mockist), you will want to test different things. If you don’t know the difference, don’t worry about it for now, but you might want to look at this article by Martin Fowler.

So if we want to make sure that “GetTax” from our interface was called:

// Retrieve the price with tax
decimal calculatedTax = myProduct.GetPriceWithTax(fakeTaxCalculator.Object);

// Verify that the "GetTax" method was called on the interface
fakeTaxCalculator.Verify(tax => tax.GetTax(25.0M));

If you want to make sure that the calculated price equals your product’s raw price with the tax added (which confirms that the taxes were calculated):

// Retrieve the price with tax
decimal calculatedTax = myProduct.GetPriceWithTax(fakeTaxCalculator.Object);

// Make sure that the taxes were calculated (expected value goes first)
Assert.AreEqual(30.0M, calculatedTax);

What’s the difference? The first example verifies the behaviour by making sure that “GetTax” was called. It doesn’t care about the value returned. It could return 100$ and the test wouldn’t care. All that matters in this example is that GetTax was called. Once that is verified, we can assume that the expected behaviour was confirmed.

The second example is state verification. We throw 25$ at the tax calculator and we expect it to give back 5$, for a total price of 30$. If GetTax were never called, the test wouldn’t care. As long as the proper value is returned, it’s valid.

Some people will argue that behaviour is better than state (or vice versa). Personally, I’m a fan of both. As a good example, I might want to verify that an invalid invoice is never persisted to the database, and a behaviour verification approach is perfect for that case. But when I’m verifying (like in this case) that the taxes were properly calculated, state verification is more often than not quicker and easier to understand.

Nothing prevents you from doing both and making sure that everything works. I’m still not a full-fledged TDD developer, but I’m trying to write tests for my classes as often as possible.

If you found this article helpful, please leave a comment! They will be most helpful for my presentation on February 25th 2009 at www.dotnetmontreal.com.

Top tips to increase your productivity

In fact… there is only one.

Stop browsing. Start working.

Nothing makes better sense than that. We are constantly surrounded by information of every kind from all kinds of sources. Whether it’s “normal news”, Microsoft announcements, a nice piece of code on DZone or DotNetKicks, or spending “a few minutes” on reddit, the most important thing is… stop it.

Keep those for home browsing.

You should see your productivity sky rocket immediately.

Don’t wait until someone tells you.

Part 1 - Introduction to Moq

See also:  Part 2 - Part 3

This is the first post of a series on mocking with Moq. I’ll be giving a talk at the .NET Montreal Community on February 25th and I thought it would be a good reference for anyone attending the @Lunch event.

What is Moq?

Moq is a mocking framework, like Rhino Mocks and TypeMock, jointly developed by Clarius, Manas and InSTEDD. It heavily uses lambdas to create expectations and return results. It’s been highly criticized for not making any distinction between mocks and stubs.

What is important to remember is that, unless you are philosophically attached to your testing style, most developers don’t make any distinction between them and simply do behaviour testing.

Moq easily allows you to change its behaviour from “Strict” to “Loose” (Loose being the default). Strict behaviour won’t allow any call on the mocked object unless it has previously been marked as expected. Loose will allow all calls and return a default value for them.
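
Here is a minimal sketch of the difference (ICalculator is a made-up interface, and this uses the Moq 2.x-era API where expectations are set with Expect):

public interface ICalculator
{
    int Add(int a, int b);
}

// Loose (the default): calls that were never set up simply return a default value
Mock<ICalculator> loose = new Mock<ICalculator>();
int looseResult = loose.Object.Add(1, 2); // returns 0, no expectation required

// Strict: any call that wasn't expected throws an exception
Mock<ICalculator> strict = new Mock<ICalculator>(MockBehavior.Strict);
strict.Expect(calc => calc.Add(1, 2)).Returns(3);
int strictResult = strict.Object.Add(1, 2); // fine, returns 3
// strict.Object.Add(5, 5);                 // would throw: never expected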

There are a lot of other, more advanced behaviours that can be configured and used.

Why another mocking framework?

Daniel Cazzulino (a.k.a. Kzu) blogged a lot about Moq and even explained why he helped create it. Moq was created to ease the learning curve of picking up a mocking framework, while blurring the distinction between mocks and stubs.

Moq lets you get into mocking quickly (a good thing) while still allowing the more complex scenarios favoured by purist mockists. It’s the perfect mocking framework if you have never touched one before and this is your first experience.

Where do I download it?

You can download Moq directly here. At the time of writing, Moq was at version 2.6.1014 and Moq 3.0 was available as a beta.

How to install it?

Once Moq is downloaded and extracted from its zip file, you can simply add the single DLL (Moq.dll) to your project, or install it in the GAC if you are going to use it in many projects.

Stay up to date

After this brief introduction, I’ll show more advanced features of Moq, with code samples showing how to use them and why.

Code Snippet - Quickly changing the host in a URL in C#

Quick code snippet today. Let’s say you have a URL and you want to change its hostname or port without doing string manipulation.

// Multiple constructors are available
UriBuilder builder = new UriBuilder("http://localhost:1712")
{
    Host = Environment.MachineName,
    Port = 80
};
Uri myNewUri = builder.Uri; // e.g. http://mymachine/ (port 80 is implied for http)

There you go! It’s as easy as that. This will save you countless hours of parsing a URL for the hostname, the port number or whatever else you are looking for.

The bonus is that the path of the URL won’t be touched at all.

Have fun!

Stop using those stupid model examples

Stop using Circle/Square/Rectangle, Person/Employee or Car/Model examples to demonstrate models, how to use object-oriented principles, or anything at all.

There are plenty of “open” models that you can use. Here’s a simple list for those who need inspiration:

  • Blog (Posts, Comments, Authors, etc…)
  • E-Commerce (Invoice, Order, Customer, Warehouse, Inventory, etc…)
  • Auction (Auction, Seller, Buyer, Reputation, etc…)
  • Bank (Account, Transactions, Customer, etc…)
  • News site (Article, User, Approver, etc…)
  • And so many more

Unless you are explaining what OOP is to total beginners who have never done any of this, you should use more advanced models to explain practices, design patterns or anything else. Otherwise, we’ll keep babbling about stupid models like how a Square is a Rectangle, and so on.

The time has come to stop using first-grade models to explain advanced concepts. Most people should be able to pick one of the models shown above and use one element of it to make the concept accessible to everyone.

Who’s with me?

Build fails only on the build server with delay-signed assemblies

[Any CPU/Release] SGEN(0,0): error : Could not load file or assembly '<YOUR ASSEMBLY>' or one of its dependencies. Strong name validation failed. (Exception from HRESULT: 0x8013141A)

I got that error from our build machine. The build crashed wonderfully and told me that an assembly could not be loaded.

After an hour of searching, I finally found the problem.

We are in an environment where we delay-sign all of our assemblies and fully sign them on the build server as an “AfterDrop” event. Of course, we add a “Skip Verification” entry for the public key token we use so that we can put the assemblies in the GAC.

All of our projects (more than 20) are built exactly that way, and I couldn’t see why this one would fail. I then decided to look at what depended on this assembly: one project, and it was the one failing in the “Build Status”.

I then found something used in that specific project that was never used anywhere else. Somebody had used the “Settings” tab to store application settings. Not a bad choice and a perfectly sound decision but… how can this make the build crash?

Well… it seems that using “Settings” forces a call to SGEN.exe, and SGEN.exe won’t take a partially-signed assembly. That’s when I figured out that our build server didn’t have any of those “Skip Verification” entries.

After searching Google with some different terms, I found, under the “Build” tab of the project properties, a way to deactivate the call to SGEN. It’s called “Generate Serialization Assembly”. By default, the value is “Auto”. After setting the value to “Off” for “Release” mode only, the build was fixed and we were happy campers.
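
The other fix, presumably, would have been to register the skip-verification entry on the build server itself, the same way it’s done on the developer machines, using the Strong Name tool (substitute your own public key token):

sn -Vr *,<yourPublicKeyToken>

Turning SGEN off for Release was simply the less invasive change for us.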

Code Snippet: Filtering a list using Lambda without a loop

This is not the latest news of the day… but if you are still filtering lists with loops, you are doing it wrong.

Here’s a code sample that will easily replace a loop:

// Requires "using System.Linq;" for the Where extension method
List<String> myList = new List<string> { "Maxim", "Joe", "Obama" };
myList = myList.Where(item => item.Equals("Obama")).ToList();

While this might sound too easy, that’s really all there is to it and it works fine. Just make sure that the objects you filter this way are not ActiveRecord DAL objects, or you might be sorry about the performance.

WSS/SharePoint Installation - Stand-Alone vs Web Front-End

A small post about that today. I ended up losing a good part of my day reinstalling SharePoint on a VMWare machine. Why?

Because at first I installed it in Stand-Alone mode, which sounded like a good idea at the time. What that does is create a SQL Server Express database that won’t allow anyone else to connect to it. Adding insult to injury, it also doesn’t work with all the normal components of a SharePoint installation.

So… after reverting to a working snapshot, redoing all my Windows Updates and reinstalling SharePoint… I ended up with a working installation.

So… as a small reminder… don’t forget to specify “Web Front-End” when first prompted by the installer. It will save you a lot of time.

Cross-Cutting Concerns should be handled on ALL projects. No Excuses

The title says it all. All cross-cutting concerns in a project should be handled, or at least given some thought, on ALL PROJECTS. No exceptions. No excuses.

Before we go further, what is a cross-cutting concern? Here is the definition from Wikipedia:

In computer science, cross-cutting concerns are aspects of a program which affect (crosscut) other concerns. These concerns often cannot be cleanly decomposed from the rest of the system in both the design and implementation, and result in either scattering or tangling of the program, or both.

The perfect example of this is error handling. Error handling is not part of the main model of an application, but it is required so developers can catch errors and log/display them. Logging is also a cross-cutting concern.

So let’s look at the 3 most important concerns:

  • Exception Management
  • Logging/Instrumentation
  • Caching

Exception Management

This is the most important one. It seems really basic: wrapping code in try {…} catch {…} and making sure everything works is the most elementary thing to do. Yet I’ve seen projects without it. Honestly… it’s bad. Really bad. Everything was working fine, but when something went wrong, nothing could handle it properly.

That said, adding exception handling to each and every method in an application is not a reasonable thing to do either.

Here is a small checklist for handling exceptions:

  1. Don’t catch an exception if there is no relevant information you can add.
  2. Don’t swallow exceptions (empty catch blocks) unless there is a very good reason to.
  3. Make sure that exceptions are managed at the contact points between layers, so relevant information can be added (see the sketch below).

Worst of all, in the project I mentioned above, you couldn’t tell whether an error was coming from the Presentation, Business or Data layer, which leads to horrible debugging.
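
To illustrate point 3, here is a minimal sketch of catching at a layer boundary just to add context before rethrowing (the repository, the DataAccessException type and all the names are made up for the example):

public Customer GetCustomer(int customerId)
{
    try
    {
        return customerRepository.FindById(customerId);
    }
    catch (SqlException ex)
    {
        // Add layer context and rethrow; the original exception survives as InnerException
        throw new DataAccessException(
            String.Format("Data layer: failed to load customer {0}", customerId), ex);
    }
}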

Which brings us to the next section…

Logging / Instrumentation

When an exception is thrown, you want to know why. Logging allows you to log everything at a location chosen for the project (database, flat file, WMI, Event Log, etc.). Most people already log a lot of things in their code but… is what’s logged relevant?

Logging is only important when it’s meaningful. Bloating your software with logging won’t bring any good. Log too much and nobody will inspect the logs for things that went wrong. Log too little and the software could be generating errors for a long time before anybody realizes.

I won’t go into too much detail, but if you want to know about code-to-logging ratios and the problems with logging, there is plenty of information out there.

Caching

Caching is too often considered the root of all evil. However, it is evil only if the code becomes unreadable or if it takes a developer 4 hours to get a 1% gain.

I once coded a part of an application that generated XML for consumption by a Flash application. I didn’t have any specifications, but I knew that if I left it uncached, I would have a bug report on my desk the next day. The caching I added kept the Flash application responsive while keeping the server load under control.

Caching is too often pushed back to a later time; it should be considered every time a service or any dynamically generated content is served to the public. Responsiveness will grow while keeping the amount of code to what is necessary.
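
As a sketch of how little code this can take, here is the micro-caching idea using the ASP.NET cache (the key, the one-second duration and GenerateExpensiveXml are all arbitrary or made up):

using System;
using System.Web;
using System.Web.Caching;

public static class FlashXmlCache
{
    public static string GetXml()
    {
        string xml = HttpRuntime.Cache["flash-xml"] as string;
        if (xml == null)
        {
            xml = GenerateExpensiveXml(); // hypothetical expensive generation
            // Keep the result for one second: every request in that window is served from memory
            HttpRuntime.Cache.Insert("flash-xml", xml, null,
                DateTime.UtcNow.AddSeconds(1), Cache.NoSlidingExpiration);
        }
        return xml;
    }

    private static string GenerateExpensiveXml()
    {
        return "<data />"; // stand-in for the real work
    }
}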

If you need more arguments, please take a look at ASP.NET Micro Caching: Benefits of a One-Second Cache.

How do I do it?

If you haven’t been paying attention for the last 10 years or so, I would suggest taking a look at Enterprise Library. It’s a great tool that lets you handle all those cross-cutting concerns without having to build your own tools.

If you don’t want to use Enterprise Library, there are plenty of other frameworks that will let you handle those concerns.

Just remember: good coders code, great coders reuse.

Creating a StreamProxy with Delegate/Lambda to read/write to a file

I recently saw a question on Stack Overflow about what delegates are useful for. The accepted answer, from Jon Skeet, listed:

  • Event handlers (for GUI and more)
  • Starting threads
  • Callbacks (e.g. for async APIs)
  • LINQ and similar (List.Find etc)
  • Anywhere else where I want to effectively apply “template” code with some specialized logic inside (where the delegate provides the specialization)

I was once asked, “What is the use of a delegate?” The main answer I found was “to delay a call”. Most people see delegates as events, most of the time. However, they can be put to much greater use. Here is an example that I’ll gladly share with you all:

using System;
using System.IO;

public class StreamProxy<T> where T : Stream
{
    // Deferred "constructor": only invoked when we actually read or write
    private Func<T> constructorFunction;

    private StreamProxy(Func<T> constructor)
    {
        constructorFunction = constructor;
    }

    public void Write(Action<StreamWriter> func)
    {
        using (T stream = constructorFunction())
        {
            StreamWriter streamWriter = new StreamWriter(stream);
            func(streamWriter);
            streamWriter.Flush();
        }
    }

    public String Read(Func<StreamReader, String> func)
    {
        using (T stream = constructorFunction())
        {
            string result = func(new StreamReader(stream));
            return result;
        }
    }

    public static StreamProxy<T> Create(Func<T> func)
    {
        return new StreamProxy<T>(func);
    }
}

To summarize what it does: it accepts a delegate that returns a class deriving from “Stream”, and uses it as a constructor. It gives you back a StreamProxy object that you can then use to read or write strings through that stream. What is interesting is that when the proxy is first created… nothing is done to the file. You are just giving the class instructions on how to access it. When you then read/write, the class manages the stream and makes sure that no locks are left on the file.

Here is a sample usage of that class:

// Here I use a FileStream, but it can also be a MemoryStream or anything that derives from Stream
StreamProxy<FileStream> proxy = StreamProxy<FileStream>.Create(
    () => new FileStream(@"C:\MyTest.txt", FileMode.OpenOrCreate));

// Writing to the file
proxy.Write(stream => stream.WriteLine("I am using the Stream Proxy!"));

// Reading from the file
string contentOfFile = proxy.Read(stream => stream.ReadToEnd());

That’s all folks! As long as you can give a Stream to this proxy, you won’t need any “using” blocks in your own code and everything will stay clean!

See you all next time!

Xml Serialization Made Easy

Most of the people I’ve seen dealing with XML have different approaches to handling the content.

If you want to read XML content, you normally have many ways to go. Some people use XPath with an XmlDocument object to retrieve the values they want. Others browse node by node to get to them.

Do you see the main problem here? The problem is not that you won’t be able to read your information. The problem is the conversions, as well as the large amount of “navigation” code needed to get to the information. The easy way to get away from conversions and navigation code is to use the XmlSerializer.

using System.IO;
using System.Text;
using System.Xml.Serialization;

[XmlRoot(ElementName = "Library")]
public class Library
{
    [XmlAttribute("id")]
    public int ID { get; set; }

    [XmlAttribute("name")]
    public string Name { get; set; }

    public string ToXml()
    {
        StringBuilder sb = new StringBuilder();
        XmlSerializer serializer = new XmlSerializer(typeof(Library));
        StringWriter sw = new StringWriter(sb);
        serializer.Serialize(sw, this);
        return sb.ToString();
    }

    public static Library FromXml(string xml)
    {
        StringReader sr = new StringReader(xml);
        XmlSerializer serializer = new XmlSerializer(typeof(Library));
        Library result = serializer.Deserialize(sr) as Library;
        return result;
    }
}

This will easily create XML with one root node carrying two attributes, id and name. The serializer will also handle any children decorated with the proper attributes. This is an easy way to serialize, as well as deserialize, XML without having to mess with XPath, XmlDocument, node navigation, etc.
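
A quick round-trip shows it in action (the values are arbitrary):

Library library = new Library { ID = 1, Name = "Downtown" };

// Produces something like: <Library id="1" name="Downtown" /> (plus the XML declaration)
string xml = library.ToXml();

// And back again
Library restored = Library.FromXml(xml);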

Little Introduction

Hi everyone,

My name’s Maxim and I’m from Montreal, Quebec in Canada.

I decided to start a blog mainly because I want to have an online presence, and I think I can bring the community a lot of information that is not currently/properly available.

The main focus will be on .NET. I will mainly be posting code, demos and links that I think are relevant and worth looking into.

I do have a true passion for technology and I hope I can actually help out on that side.

See you all later