Creating and using NuGet packages with Visual Studio Team Services Package Management

The application I’m building for my client at the moment had some interesting opportunities for refactoring.

More precisely, opportunities to extract reusable components that could be used in other projects. And when reusability comes calling, NuGet is the obvious answer.

Setting the stage

All of those projects are hosted on Microsoft Visual Studio Team Services, and the service has just the right extension for us. It may not be the best of all tools, but for this very Microsoft-focused client, it was the most cost-efficient option.

Pricing

I won’t delve into the details, but here’s what it costs us. At the time of this writing, Package Management offers 5 free users, and additional feed users are $4/month per user, excluding any users on an MSDN Enterprise license.

Installing the extension

The first step is to install the extension. As with all extensions, you will need to be an Administrator of your VSTS account to do that. If you are not, you can still go to the marketplace and request the extension. It won’t actually install, but it will be visible in the extension menu of your account.

https://ACCOUNTNAME.visualstudio.com/_admin/_extensions

Make sure Package Management is installed before going further

Once your VSTS administrator approves the extension and the related fees, we’re ready to set up our feed.

Setting up my first feed

Next we’re going to pick any project and go into the Build & Release section.

Here’s a gotcha. Look at this menu.

Build & Release Menu

Everything in it is specific to your project, right? Well, it sure isn’t for Packages.

Feeds are NOT project specific. They may have permissions that make them look project-specific, but they are not; they are scoped to your account. With that said, I’m not interested in creating one feed per project.

I created a feed and called it Production (could also be Main, Release or the name of your favorite taco).

Click the Connect to feed button and copy the Package source URL into Notepad. We’re going to reuse it.

Setting up my NuGet nuspec file

Here’s my default PROJECT.nuspec file.

<?xml version="1.0"?>
<package>
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <title>$title$</title>
    <authors>$author$</authors>
    <owners>$author$</owners>
    <projectUrl>https://ACCOUNTNAME.visualstudio.com/PROJECT/</projectUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>$description$</description>
    <copyright>Copyright 2017</copyright>
    <tags></tags>
  </metadata>
</package>

Yep. That easy. Now add this file to the project that will be packaged into a NuGet package.

For us, it’s a simple Class Library without too many dependencies, so it’s not terribly complicated.

Validating your AssemblyInfo.cs

[assembly: AssemblyTitle("MyAwesomePackage")]
[assembly: AssemblyDescription("Please provide a description. Empty is no good.")]
[assembly: AssemblyCompany("This will be the Owner and Author")]
[assembly: AssemblyProduct("MyAwesomePackage")]
[assembly: AssemblyCopyright("Copyright © MyCompanyName 2050")]

This is the bare minimum to have everything working. Empty values will not do, and, well… that’s where we are.

Fill them in.

AssemblyInfo.cs Versioning

[assembly: AssemblyVersion("1.0.*")]

For a public OSS package, this might be a horrendous way to do versioning. Within a company? It’s totally fine for us. We will, however, need to bump it manually when we introduce breaking changes or major features.

By default, the $version$ token takes its value from AssemblyInformationalVersionAttribute, falls back to AssemblyVersionAttribute, and finally uses 0.0.0.0. AssemblyFileVersion is never used.

Read up more on the $version$ token here.
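For example, if both attributes are present, the informational version wins; the version numbers below are made up for illustration:

[assembly: AssemblyVersion("1.2.0.0")]
[assembly: AssemblyInformationalVersion("1.2.0-beta1")]
// $version$ resolves to 1.2.0-beta1, not 1.2.0.0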

Once you are here, most of the work is done. Now it’s time to create the package as part of your build.

Build vs Release

Here’s what is important to get: it is totally possible to publish your package as part of your build, but we won’t be doing so. Building should create artifacts that are ready to publish; it should not affect environments. We may create hundreds of builds in a release cycle but only want to deploy to production once.

We respect the same rules when we create our NuGet packages: we create the package in our build, and we publish to the feed in our release.

Building the NuGet package

The first thing to do is to add the NuGet Packager task and drag it to be just below your Build Solution task.

NuGet Packager task

By default, it’s going to create a package out of every csproj in your solution. Do not change the target to the nuspec file; that will not work with our tokenized nuspec. Instead, point the task at the right project. To make sure the package is easy to find in the artifacts, I’ve set the Package Folder to $(build.artifactstagingdirectory).

NuGet Packager task

[2017-04-07] WARNING: If your package requires a version of NuGet higher than 2.X, you will have to specify the path to NuGet manually. The agents are not running the latest version of NuGet at the time of this writing, and the UI does not let you pick a specific version either. You have to provide your own.

Once your build succeeds, you should be able to open the completed build and see your package in the Artifact Explorer.

Created Package

Publishing the NuGet Package

So the package is built, now let’s push it to the feed we created.

If you are just publishing a package like me, your release pipeline should be very simple like mine.

Release Pipeline

Remember that feed URL we stored in a Notepad earlier? Now’s the time to bring it back.

The NuGet Publisher task is going to need it. Just make sure you select Internal NuGet Feed and copy/paste it in your Internal Feed URL field.

Release Pipeline

Now would be a good time to create a new release for your previous build.

NOTE: Make sure that you configure your Triggers properly. In my case, my build is backed by a Production branch so I’ve set the trigger to be Continuous Deployment. If you are not branching, you may want to launch a release manually instead.

Once your release is done, your package (seen in the Artifact Explorer) should now appear in your feed.

Created package

Using the feed

If you look at the Connect to feed dialog, you may be tempted to try the VSTS Credential Provider. Don’t. You don’t need it. You need a file: a nuget.config file.

The VSTS Credential Provider is only needed when you want to run command-line operations against the feed.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="Production" value="https://ACCOUNT.pkgs.visualstudio.com/_packaging/Production/nuget/v3/index.json" />
  </packageSources>
</configuration>

Create a nuget.config file with your feed URL (you still have it in Notepad, right?) and just drop it beside your SLN file. If your solution was already loaded in Visual Studio, close it and re-open it.

If you right-click on your solution, select Manage NuGet Packages for Solution... and expand the Package source dropdown, you will see your feed.

All of your packages should be visible and installable from there.

Manage Packages for Solution

Are you using it?

This is my first experience creating a NuGet package in an enterprise environment. I’ve probably missed a ton of opportunities to make the process better.

Did I miss anything? Do you have a better strategy for versioning? Let me know in the comments!

One last note: since our accounts are already on Azure Active Directory, authenticating against the feed is seamless.

What's new in .NET Core 1.0? - New Project System

There’s been a lot of fury from the general ecosystem about retiring project.json.

It had to be done if Microsoft didn’t want to work in parallel on two different build systems with different ideas and different maintainers.

Without restarting the war that was the previous GitHub issue, let’s see what the new project system is all about!

csproj instead of project.json

First, we’re back with csproj. So let’s simply create a single .NET Core Console app from Visual Studio that we’ll originally call ConsoleApp1. Yeah, I’m that creative.

Editing csproj

By right clicking on the project, we can see a new option.

Editing csproj

Remember opening a csproj before? You could either have the solution loaded or edit the csproj manually, never both at the same time. Of course, we would all open the file in Notepad (or Notepad++) and edit it anyway.

Once we came back to Visual Studio however, we were prompted to reload the project. This was a pain.

No more.

Editing csproj

Did you notice something?

New csproj format

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

</Project>

Yeah. It is. The whole content of my csproj. Listing all your files is not mandatory anymore. Here’s another thing: open Notepad and copy/paste the following into it.

public class EverythingIsAwesome
{
    public bool PartOfATeam => true;
}

Now save that file (mine is Test.cs) at the root of your project, right beside Program.cs, and swap back to Visual Studio.

Everything is awesome

That’s the kind of feature my dreams are made of. No more Show all files, no more including external files into my projects, no more resolving all those merge conflicts.

Excluding files

What about the file you don’t want?

While keeping the csproj open, right click on Test.cs and exclude it. Your project file should have this added to it.

<ItemGroup>
  <Compile Remove="Test.cs" />
</ItemGroup>

What if I want to remove more than one file? Good news, everyone! It supports wildcards: you can remove a single file, whole folders, and more.

Now remove that section from your csproj. Save. Test.cs should be back in your solution explorer.

Are you going to use it?

When new features are introduced, I like to ask people whether it’s a feature they would use and whether it will impact their day-to-day.

So please leave me a comment and let me know if you want me to dig deeper.

Contributing to Open-Source - My first roslyn pull request - Fixing the bug

Getting your environment ready

If you haven’t done so yet, please check part 1 to get your environment ready. Otherwise, you’ll just run in circles.

Reading the bug

After getting the environment ready, I decided to actually read what the bug was all about. I only glanced at it at first and, I’ll be honest… I was hooked by the “low hanging fruit” description and the “up for grabs” tag. I didn’t even understand what I was throwing myself into…

Update: an issue marked “up-for-grabs” does not mean it is easy. If you grab an issue, leave a comment and do not hesitate to ask for help. The maintainers are there to help and guide you.

Here’s the bug: dotnet/roslyn/issues/18391.

So let’s read what it is.

The Bug

So basically, there’s a refactoring that, when you are hiding a base member, puts the new keyword on the derived member.

That refactoring was broken for fields.

It refactored to:

public class DerivedClass : BaseClass {
    public const new int PRIORITY = BaseClass.PRIORITY + 100;
}

Instead of:

public class DerivedClass : BaseClass {
    public new const int PRIORITY = BaseClass.PRIORITY + 100;
}

Do you see it? The order of the modifiers is wrong. The new keyword MUST come before the const keyword, so the refactoring produces code that doesn’t even compile. Hummm… that’s bad.

Help. Please?

That’s where Sam really helped me. It’s how it should be in any open source project. Sam pointed me in the right direction while I was trying to understand the code and find a fix.

The problematic code

HideBaseCodeFixProvider.AddNewKeywordAction.cs

private SyntaxNode GetNewNode(Document document, SyntaxNode node, CancellationToken cancellationToken)
{
    SyntaxNode newNode = null;

    var propertyStatement = node as PropertyDeclarationSyntax;
    if (propertyStatement != null)
    {
        newNode = propertyStatement.AddModifiers(SyntaxFactory.Token(SyntaxKind.NewKeyword)) as SyntaxNode;
    }

    var methodStatement = node as MethodDeclarationSyntax;
    if (methodStatement != null)
    {
        newNode = methodStatement.AddModifiers(SyntaxFactory.Token(SyntaxKind.NewKeyword));
    }

    var fieldDeclaration = node as FieldDeclarationSyntax;
    if (fieldDeclaration != null)
    {
        newNode = fieldDeclaration.AddModifiers(SyntaxFactory.Token(SyntaxKind.NewKeyword));
    }

    // Make sure we preserve any trivia from the original node
    newNode = newNode.WithTriviaFrom(node);

    return newNode.WithAdditionalAnnotations(Formatter.Annotation);
}

We are not interested in the first two parts (propertyStatement, methodStatement) but in the third one. My first impression was that const was considered a modifier and that new was also a modifier. The normal behavior of List.Add(...) is to append at the end, which yields const new.

That’s where the bug is.
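To illustrate the append behavior with a plain collection (an analogy only, not the actual Roslyn API):

using System;
using System.Collections.Generic;

public static class ModifierAnalogy
{
    public static void Main()
    {
        var modifiers = new List<string> { "public", "const" };
        // Add() appends at the end, which is exactly what happened to `new`
        modifiers.Add("new");
        Console.WriteLine(string.Join(" ", modifiers)); // public const new -> invalid modifier order
    }
}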

The Solution

Well… I’m not going to walk you through the whole process, but basically I first tried to do the job myself by inserting the new keyword at position 0 of the modifier list. After a few back-and-forths with Cyrus Najmabadi, we settled on a version that uses their SyntaxGenerator.

The SyntaxGenerator ensures that all modifiers are in the proper order, and it works for all kinds of members. Here is the refactored code once all the back and forth was done.

private SyntaxNode GetNewNode(Document document, SyntaxNode node, CancellationToken cancellationToken)
{
    var generator = SyntaxGenerator.GetGenerator(document);
    return generator.WithModifiers(node, generator.GetModifiers(node).WithIsNew(true));
}

Wow. Ain’t that pretty.

Unit Testing

HideBaseTests.cs

A big part of Roslyn is the tests. You can’t write a compiler and have zero tests; there are enough tests in there to melt a computer.

Most of the back and forth we did was on the tests. In fact, I added more lines of tests than actual production code.

Here’s a sample test I did:

[WorkItem(14455, "https://github.com/dotnet/roslyn/issues/14455")]
[Fact, Trait(Traits.Feature, Traits.Features.CodeActionsAddNew)]
public async Task TestAddNewToConstantInternalFields()
{
    await TestInRegularAndScriptAsync(
@"class A { internal const int i = 0; }
class B : A { [|internal const int i = 1;|] }
",
@"class A { internal const int i = 0; }
class B : A { internal new const int i = 1; }
");
}

First, they are using xUnit, so you have to brand everything with FactAttribute and TraitAttribute to properly categorize the tests.

Second, if you add a test that fixes a GitHub issue, you have to add the WorkItemAttribute the way I did in this code; it ties the test back to the issue it fixes.

Finally, the [| ... |] syntax. I didn’t know anything about it, but my assumption at the time was that it marks the code that is going to be refactored, with the second snippet describing how we expect it to end up.

Testing in Visual Studio

Remember when we installed the Visual Studio Extensibility Workload?

That’s where we use it. In the Roslyn.sln opened solution, find the project VisualStudioSetup and make it your start-up project. Hit F5.

An experimental instance of Visual Studio will be launched, and you will be able to test your newly updated code in it. Then launch a separate, standard instance of Visual Studio.

You now have 1 experimental Visual Studio with your fixes and 1 standard instance of Visual Studio without your fixes.

You can now write the problematic code in the standard instance and see the problem. Copy/paste your original code in the experimental instance and marvel at the beauty of the bug fix you just created.

Creating the pull request

As soon as my code was committed and pushed to my fork, I only had to create a pull request from the GitHub interface. Once that pull request is created, any commits you push on top of that branch of your fork are included in the pull request.

This is where the back and forth with the .NET team truly started.

What I learned

Building the compiler isn’t easy. Roslyn is a massive 200-project solution that takes a lot of time to open. Yes, even on a Surface Book pimped to the max with an SSD, an i7 and 16GB of RAM. I would love to see better compartmentalization of the projects so I don’t open all 200 of them at once just to build a UI fix.

Sam and I had to do a roundtrip on the WorkItemAttribute. It wasn’t clear how it should be handled. So he created a pull request to address that.

Speaking of tests, the piped-string notation was really foreign to me and took me a while to understand. Better documentation should be among the priorities.

My experience

I really loved fixing that bug. It was hard, then easy, then lots of back and forth.

In the end, I was pointed in the right direction and I really want to tackle another one.

So my thanks to Sam Harwell and Cyrus Najmabadi for carrying me through fixing a refactoring bug in Visual Studio/Roslyn. If you want contributors, you need people like them to manage your open source project.

Will you contribute?

If you are interested, there are a ton of issues that are up-for-grabs on the roslyn repository. Get ready. Take one and help make the Roslyn compiler even better for everybody.

If you are going to contribute, please let me know in the comment! If you need more details, please also let me know in the comments.

Contributing to Open-Source - My first roslyn pull request - Getting the environment ready

I’ve always wanted to contribute back to the platform that constitutes the biggest part of my job.

However, the platform itself is huge. Where do you start? Everyone says documentation. Yeah, I’ve done that in the past. Those are easy picks; everyone can do them. I wanted to contribute real, actual code that would help developers.

Why now?

I had some time around lunch and I saw this tweet by Sam Harwell:

Low-hanging fruit. Compiler. Actual code.

What could possibly go wrong?

The Plan

The plan was relatively simple: I can’t contribute if I can’t get the code compiling on my machine and the tests running.

So the plan went as such:

  1. Create a fork of roslyn.
  2. git clone the repository to my local machine
  3. Follow the instructions of the repository. Well… the master branch instructions
  4. Test in Visual Studio
  5. Create a pull request

Getting my environment ready

I have a Surface Book i7 with 16GB of RAM. I also have Visual Studio 2017 Enterprise installed with the basic .NET and WebDev workloads.

Installing the necessary workloads

The machine is strong enough but if I wanted to test my code in Visual Studio, I would need to install the Visual Studio extension development workload.

This can easily be installed by opening the new Visual Studio Installer.

Forking the repository

So after installing the necessary workloads, I headed to GitHub and went to the roslyn repository and created a fork.

Forking Roslyn

Cloning the fork

You never want to clone the main repository, especially in big projects. You fork it, and submit pull requests from the fork.

So from my local machine I ran git clone from my fork:

Cloning Roslyn

What it looks like for me:

git clone https://github.com/MaximRouiller/roslyn.git

This may take a while. Roslyn is massive and there is a ton of code. Almost 25k commits at the time of this post.

Running the Restore/Build/Test flow

That’s super simple. Go to the roslyn directory in a prompt and run the following commands sequentially.

  • Restore.cmd
  • Build.cmd
  • Test.cmd

If your machine doesn’t hit 100% CPU for a few minutes, you may have a monster of a machine. This took some time. Restore and build were not too slow, but the tests? Oh god… it doesn’t matter what CPU you have. It’s going down.

As I didn’t want to spend another 30 minutes re-running all the tests, Sam Harwell suggested using xunit.runner.wpf to run specific tests and avoid re-running the world. Trust me: clone and build that repository. It will save you time.

And that completes getting the environment ready.

Time to actually fix the bug now.

Fixing the bug

Stay with us for part 2 where we actually get to business.

What's new in VS2017? - Visual Studio Installer

When installing Visual Studio in the past, you would be faced with a wall of checkboxes that would leave you wondering what you needed for what.

Introducing Workloads

Visual Studio 2017 workloads

Workloads are an easy way to select what kind of work you are going to do with Visual Studio. .NET Desktop development? Windows Phone? Web? All are covered.

If you look at that screenshot, you can see something new in there that wasn’t included previously. Azure. No more looking around for that Web Platform Installer to install the proper SDK for you.

You can access it directly from the installer. But what happens once you’re done and you want to modify your workloads?

If you start working on Visual Studio extensions, you need to be able to install that too.

Access the Installer after installation

There are two ways.

The first is to press your Windows key and type Visual Studio Installer. Once the window is open, click the little hamburger menu and then Modify.

The second is to access it through the File > New Project... menu.

Visual Studio 2017 workloads through projects

By clicking this link, the installer will open for you without having to go through the hamburger menu. Just pick your features.

Does that matter for you?

How do you like the new installer? Is it more user-friendly than what came before?

What else could be improved? Let me know in the comments.

What's new in VS2017? - Lightweight Solution Load

There’s a brand new option in Visual Studio 2017 that many users might overlook. It’s called Lightweight Solution Load.

While most solutions open relatively quickly, some simply don’t.

Take a solution like roslyn: it has around 200 projects. That is massive. To be fair, you don’t need to be that demanding to see performance degradation in Visual Studio. Even if Visual Studio 2017 is faster than 2015, huge solutions can still take a considerable amount of time to load.

This is where Lightweight Solution Load comes into play.

What is lightweight solution load?

Once this option is turned on, Visual Studio stops fully pre-loading every project and instead relies on the minimal amount of information needed to have the solution functional. Files are not populated until a project is expanded, and dependencies that are not required yet are not loaded either.

This allows you to open a solution, expand a project, edit a file, recompile, and be on your way.

How to turn it on?

There are two ways to turn it on: individually, for a single solution, by right-clicking the solution and selecting this option:

Turning it on for an Individual Solution

Or globally for all future solutions that are going to be loaded by opening your Tools > Options... menu:

Turning it on for All Solutions

What is the impact?

Besides awesome performance? There might be Visual Studio features that just won’t work unless a project is fully loaded. Please see the list of known issues for the current release.

Alternative

If you do not want to play with this feature but still find your solution too slow to load, there’s an alternative.

Break up your solution into different chunks. Most applications can be split into many smaller solutions. This reduces load time and generally gives you faster compile times too.

Are you going to use it?

When new features are introduced, I like to ask people whether it’s a feature they would use.

So please leave me a comment and let me know if this feature is going to be used on your projects. How many projects do you normally have in a solution?

What's new in VS2017 and C# 7.0? - Throw Expression

throw has been a keyword since the first version of C#, and the way developers interact with it hadn’t been touched since.

Sure, lots of new features have been brought on board over the years, but… throw? Never touched. Until now.

C# 6.0

I do not really need to tell you how to throw an exception. There are two ways.

public void Something()
{
    try
    {
        // throwing an exception
        throw new Exception();
    }
    catch (Exception)
    {
        // re-throw an exception to preserve the stack trace
        throw;
    }
}

And that was it. If you wanted to throw an exception anywhere else, you were out of luck.

C# 7.0

Everything you see below was invalid before C# 7.0.

public class Dummy
{
    private string _name;

    public Dummy(string name) => _name = name ?? throw new ArgumentNullException(nameof(name));

    public void Something(string t)
    {
        Action act = () => throw new Exception();
        var nonNullValue = t ?? throw new ArgumentNullException();
        var anotherNonNullValue = t != null ? t : throw new ArgumentNullException();
    }

    public string GetName() => _name;
}

The difference

Oh so many of them. Here’s what was included in the previous snippet of code.

You can now throw from:

  • Null-coalescing operator. The ?? operator used to provide an alternative value. Now you can use it to throw when a value shouldn’t be null.
  • Lambda. I still don’t understand exactly why it wasn’t allowed before, but now? Totally legit.
  • Conditional operator. Throw from the left or the right of the ?: operator anytime you feel like it. Before? Not allowed.
  • Expression body. In fact, any expression body will support it (see the sketch below).
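For instance, a handy use is an expression-bodied method that always throws; the method name here is made up:

public string NotReadyYet() => throw new NotImplementedException();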

Where you still can’t throw (that I verified):

  • if(condition) statement. You cannot throw in the condition of an if statement. Even inside a null-coalescing operator, it won’t work and is not valid syntax.

Are you going to use it?

I know that not everyone will necessarily use all of these new throw forms. But I’m interested in your opinion.

So please leave me a comment and let me know if it’s something that will simplify your life or, at least, your code.

What's new in VS2017 and C# 7.0? - More Expression Body

Expression-bodied members are a relatively new concept brought to us in C# 6.0 to cut down the ceremony around simple properties and methods.

C# 7.0 removes even more ceremony from many more concepts.

C# 6.0

Previously, C# 6.0 introduced the concept of expression-bodied members.

Here are a few examples.

// get-only expression-bodied property
public string MyProperty => "Some value";
// expression-bodied method
public string MyMethod(string a, string b) => a + b;

C# 7.0

With the new release of C# 7.0, the concept has been added to:

  • constructors
  • destructors
  • getters
  • setters

Here are a few examples.

class TestClass
{
    private string _name;

    // expression-bodied constructor
    public TestClass(string name) => _name = name;
    // expression-bodied destructor
    ~TestClass() => _name = null;

    public string Name
    {
        get => _name;
        set => _name = value;
    }
}

The difference

The main difference in your code will be the number of lines wasted on useless curly braces and plumbing code.

Yet again, this new version of C# offers you more ways to keep your code concise and easier to read.
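To make the savings concrete, here is the same Name property written both ways (a sketch reusing the _name field from above; you would keep only one of the two):

// Before: full ceremony
public string Name
{
    get { return _name; }
    set { _name = value; }
}

// After: expression-bodied accessors
public string Name
{
    get => _name;
    set => _name = value;
}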

Are you going to use it?

I know that not everyone will necessarily use all of these new forms of expression bodies. But I’m interested in your opinion.

So please leave me a comment and let me know if it’s something that will simplify your life or, at least, your code.

What's new in VS2017 and C# 7.0? - Local Functions

Local functions are all about declaring functions within functions. See them as normal functions with a more restrictive scope than private.

In C# 6.0, if you needed to declare a method within a method to be used within that method exclusively, you created a Func or Action.

C# 6.0 - Local Functions (before)

But here are some issues with Func<>/Action<>: first, they are objects, not functions. Every time you declare a Func, you allocate memory for the delegate, and that can put unnecessary pressure on your environment.

Second, a Func cannot call itself (also known as recursion) and, finally, it has to be declared before you use it, just like any variable.

public void Something(object t)
{
    // allocates memory for the delegate
    Func<object> a = () => new object();

    Func<int, int> b = (i) => {
        // do something important
        return b(i); // <=== ILLEGAL. Can't invoke itself; b is not definitely assigned yet.
    };
}

Here’s the problem with those, however… if you do not want to allocate the memory, or if you need to do recursion, you have to move the code to a separate method and scope it properly (private).

Doing that, however, makes your method available to the whole class. That’s not good.

C# 7.0 - Local Functions

public bool Something(object t)
{
    return MyFunction(t);

    bool MyFunction(object obj)
    {
        // return a value based on `obj`
        return obj != null;
    }
}

This is how a local function is declared in C# 7.0. It works the same way as a lambda but without allocation and without exposing private functions that shouldn’t be exposed.

The difference

The main differences are:

  • No memory allocation. Pure function that is just ready to be invoked and won’t be reallocated every time the method is called.
  • Recurse as much as you want. Since it’s a normal method, you can use recursion just like in any other method.
  • Use it before declaring it. Just like any other method in a class, you can call it above the line where it is declared. Variables, by contrast, need to be declared before they are used.

In short, see it as a normal method with more aggressive scoping than private.
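As a quick sketch of the recursion point, here is a classic factorial written as a local function (method names are made up):

public int Factorial(int n)
{
    // The local function can be called before the line that declares it...
    return Fact(n);

    // ...and it can recurse, which a Func<int, int> could not do as easily.
    int Fact(int i) => i <= 1 ? 1 : i * Fact(i - 1);
}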

Are you going to use it?

When new features are introduced in a language, I like to ask people whether it’s a feature they would use.

So please leave me a comment and let me know if it’s something that will simplify your life or, at least, your code.

BrowserStack and Microsoft Partnership - Testing Microsoft Edge has never been that easy!

Microsoft just announced a partnership with BrowserStack to provide free testing of Microsoft Edge.

You probably already know that BrowserStack is the place to go when you need to test multiple browsers without installing them all on your machine.

Today, you can expand your manual and automated tests to always include Microsoft Edge.

Not only that, they include 3 channels: the latest two stable versions as well as the preview (available to Insiders).

This will enable you to ensure that your application works on most Edge versions, as well as future-proof yourself against the next release.

Try it now!

What's new in VS2017 and C# 7.0? - Tuples

C# 7.0 introduces tuples into the language for the first time. Although present in many other languages, and available before as a generic type (System.Tuple<...>), it wasn’t until C# 7.0 that tuples were actually included in the language specification.

The raison d’être of the tuple is to return two values at the same time from a method.

C# 6.0

Many solutions were provided to us before.

We could use:

  • Out parameters. But they are not usable in async methods, so that’s one solution gone.
  • System.Tuple<...>. But just like Nullable<...>, it’s very verbose (see the sketch after this list).
  • Custom types. But now you are creating classes that will never be reused ever again.
  • Anonymous types. But passing them around required dynamic, which adds a huge performance overhead every time it’s used.
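As a reminder of what the System.Tuple approach looked like, here is a sketch of the pre-C# 7.0 pattern:

public Tuple<string, string> Something()
{
    return Tuple.Create("Hello", "World");
}

public void UsingIt()
{
    // Item1/Item2 are unhelpful names, and Tuple is a class, so it also allocates on the heap
    var value = Something();
    Console.WriteLine($"{value.Item1} {value.Item2}");
}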

C# 7.0 - Defining tuples

The simplest use is like this:

public (string, string) Something()
{
    // returns a literal tuple of strings
    return ("Hello", "World");
}

public void UsingIt()
{
    var value = Something();
    Console.WriteLine($"{value.Item1} {value.Item2}");
}

Why stop there? If you don’t want ItemX as the member names, you can customize them in two different ways.

public (string hello, string world) NamedTupleVersion1()
{
    return ("Hello", "World");
}

public (string, string) NamedTupleVersion2()
{
    return (hello: "Hello", world: "World");
}

The difference

The difference is simpler code, less out usage, and fewer dummy classes whose only job is to transport simple values between methods (see the sketch below).
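As a sketch of what that buys you (the TryGetUser/GetUser methods are hypothetical):

// Before: an out parameter
public bool TryGetUser(int id, out string name)
{
    name = "Maxime";
    return true;
}

// After: a named tuple; no out, no dummy class
public (bool found, string name) GetUser(int id) => (true, "Maxime");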

Advanced scenarios

C# 7.0 - Deconstructing tuples (with and without type inference)

When you invoke a third-party library, tuples will already come either with their names or in a very specific format.

You can deconstruct the tuple and convert it straight into variables. How, you ask? Easily.

var myTuple = (1, "Maxime");

// explicit type definition
(int Age, string Name) = myTuple;
Console.WriteLine($"Age: {Age}, Name: {Name}.");

// with type inference
var (age, name) = myTuple;
Console.WriteLine($"Age: {age}, Name: {name}.");

If you take the previous example, normally, you would need to access the first property by using myTuple.Item1.

Hardly readable. However, we created the Age variable easily by deconstructing it. Wherever the tuple comes from, you can deconstruct it in one line of code, with or without type inference.

Are you going to use it?

When new features are introduced in a language, I like to ask people whether it’s a feature they would use.

So please leave me a comment and let me know if it’s something that will simplify your life or, at least, your code.

What's new in VS2017 and C# 7.0? - Pattern Matching

C# 7.0 introduces pattern matching. Well, compared to the other features, this one requires a little bit of explanation.

There are many types of pattern matching, and three are supported in C# 7: type, const, and var.

If you used the is keyword before, you know it tests for a certain type. However, you still needed to cast the variable if you wanted to use it. That alone made the is operator mostly irrelevant; people preferred to cast with as and check for null rather than check for types.

C# 6.0 - Type Pattern Matching (before)

public void Something(object t)
{
    var str = t as string;
    if (str != null)
    {
        // do something
    }

    var type = t.GetType();

    if (type == typeof(string))
    {
        var s = t as string;
    }
    if (type == typeof(int))
    {
        var i = (int)t;
    }
}

C# 7.0 - Type Pattern Matching

public void Something(object t)
{
    if (t is string str)
    {
        // do something
    }

    switch (t)
    {
        case string s:
            break;
        case int i:
            break;
        // ...
        default:
            break;
    }
}

The difference with Type Pattern Matching

This saves you one line in a pattern that is common and repeated way too often.

More pattern matching

C# 7.0 - Const Pattern Matching

The const pattern basically checks for a specific value. That includes null checks; other constants may also be used.

public void Something(object t)
{
    if (t is null) { /*...*/ }
    if (t is 42) { /*...*/ }
    switch (t)
    {
        case null:
            // ...
            break;
    }
}

C# 7.0 - Var Pattern Matching

This one is a bit weirder and may look completely pointless, since it matches anything without checking the type.

However, when you couple it with the when keyword… that’s where the magic starts.

private int[] invalidValues = { 1, 4, 7, 9 };

public bool IsValid(int value)
{
    switch (value)
    {
        case var validValue when !invalidValues.Contains(value):
            return true;

        case var invalidValue when invalidValues.Contains(value):
            return false;

        default:
            return false;
    }
}

Of course, this example is trivial, but in a real-life line-of-business application you end up with a very versatile way of putting incoming values into the proper bucket (see the sketch below).
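Here is a sketch of that bucketing idea, with made-up thresholds:

public string Bucket(int amount)
{
    switch (amount)
    {
        case var small when amount < 100:
            return "small";
        case var large when amount >= 1000:
            return "large";
        default:
            return "medium";
    }
}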

Are you going to use it?

When new features are introduced in a language, I like to ask people whether it’s a feature they would use.

So please leave me a comment and let me know if it’s something that will simplify your life or, at least, your code.

What's new in VS2017 and C# 7.0? - Out Variables

When using certain APIs, some parameters are declared as out parameters.

A good example of this is the Int32.TryParse(string, out int) method.

So let’s check the difference in invocation between C# 6.0 and C# 7.0.

C# 6.0

public void DoSomething(string parameter)
{
    int result;
    if (Int32.TryParse(parameter, out result))
    {
        Console.WriteLine($"Parameter is an int and was parsed to {result}");
    }
}

C# 7.0 (with type inference)

public void DoSomething(string parameter)
{
    if (Int32.TryParse(parameter, out int result))
    {
        Console.WriteLine($"Parameter is an int and was parsed to {result}");
    }

    // w/ type inference
    if (Int32.TryParse(parameter, out var i))
    {
        // ....
    }
}

The difference

Now you don’t need to define the variable on a separate line. You can inline it directly and, in fact, you can just use var instead of int in the previous example since the type can be inferred directly inline. This is called type inference.

It is important to note, however, that the variable is scoped to the enclosing method, not to the if statement itself. So the result variable is available in both the if and the else blocks.
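A quick sketch of that scoping rule:

public void ScopeDemo(string parameter)
{
    if (Int32.TryParse(parameter, out var result))
    {
        Console.WriteLine($"Parsed: {result}");
    }
    else
    {
        // result is still in scope here; TryParse set it to 0 when parsing failed
        Console.WriteLine($"Not a number; result defaulted to {result}");
    }
}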

Are you going to use it?

When new features are introduced in a language, I like to ask people whether it’s a feature they would use.

So please leave me a comment and let me know if you are going to use inline out variables.

What's new in VS2017 and C# 7.0? - Literals

Literals

C# 6.0

The C# 6.0 way to define integers in .NET is to just type the number and there’s no magic about it.

You can assign it directly by typing the number or, if you have a specific hexadecimal value in mind, you can use the 0x literal prefix to define it.

Not teaching anyone anything new today with the following piece of code.

int hexa = 0x12f4b12a;
int i = 1235;

C# 7.0

Now, C# 7.0 adds support for binary literals. If you have a specific binary representation that you want to test, you can use the 0b literal prefix to define it.

var binary = 0b0110011000111001;

Another nice feature that is fun to use is the digit separator. It was supposed to be introduced in C# 6.0 but was delayed to 7.0.

Separators do not affect the value in any way and can be applied to any numeric literal.

var hexa = 0x12f4_b12a;
var binary = 0b0110_0110_0011_1001;
var integer = 1_000_000;

They can be applied anywhere in the number and will not impact its evaluation.
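A quick sanity check that the separator is purely cosmetic:

Console.WriteLine(1_000_000 == 1000000);      // True
Console.WriteLine(0b0110_0110 == 0b01100110); // True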

Are you going to use it?

When new features are introduced in a language, I like to ask people whether it’s a feature they would use.

So please leave me a comment and let me know if binary or separators are a feature that will be used.

Creating my first .NET Core app running on Linux with Docker

I’ve always liked the idea of running .NET Core on multiple platforms. I’ve never had the guts, however, to jump head first into a Linux-only installation on my machine.

Since Windows 10 added support for Hyper-V and Docker a while ago and with the release of the tooling for .NET Core with VS2017, I decided to give it another go.

Here is what you will need to follow along before we get any further.

Requirements

  • Windows 10 Pro 64-bit or higher. This does not work with anything less.
  • If you haven’t installed Hyper-V, Docker will prompt you to install it for you and reboot your machine. Save your work!
  • Install Docker for Windows (I went with stable) and follow the steps

Making sure your C drive is shared

The last requirement to ensure that your machine is ready to run .NET Core apps is making sure that your C drive is shared.

Once you install Docker for Windows for the first time, go to the notification tray and right-click on the whale.

The whale

Once the contextual menu pops up, select the settings option:

The setting

Finally, go into the Shared Drives menu on the left and ensure that the C drive is shared. It will prompt you for your password.

Shared Drives

Click on Apply and now we are ready.

Creating a docker application

Once our little prerequisites are satisfied, the rest of the steps are really easy.

We will create a new ASP.NET Core Web Application, making sure that we enable Docker support.

New App

If you missed the previous step, it’s always possible to enable docker support once the application is created by right clicking on your project and clicking Add > Docker Support.

Adding docker support

Whatever path you took, you should now have 2 projects in your solution. Your initial project and a docker-compose project.

docker-compose

Testing out our application

The first modification that we will do to our application is add a line in our /Views/Home/Index.cshtml file.

<h1>@System.Runtime.InteropServices.RuntimeInformation.OSDescription</h1>

I’ve added it to the top to make sure it works.

First, select your project and ensure it starts in either Console or in IIS Express mode and press F5. Once the application is launched, you should see something like this:

windows-run

Now, select the docker-compose project and press F5. Another Window should open up and display something like this:

docker-run

The OS Description might not be exactly this but you should see “Linux” in there somewhere. And… that’s it!

You now officially have a multi-platform .NET Core application running on your Windows 10 machine.

Conclusion

Without knowing anything about how Docker works, we managed to create a .NET Core application and have it run on both Windows and Linux in less than 15 minutes.

Since being able to doesn’t mean you should, I highly recommend reading up on Docker to ensure that it’s the proper tool for the job.

In fact, reading up on the whole concept of containers would be advised before jumping in with both feet.

If you are interested in seeing how we can deploy this to the cloud, let me know in the comments!

Adding TFS to your Powershell Command Line

UPDATE: Added VS2017 support

If you are mostly working with other editors than Visual Studio but still want to be able to use TFS with your team mates, you will need a command line solution.

The first thing you will probably do is to do a Google Search and find where the command line utility is located.

C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\tf.exe

Now, you could simply add the folder to your %PATH% and make it available to your whole machine. But… what about setting an alias instead? Basically, just import this specific command without importing the whole folder.

PowerShell

First, run notepad $PROFILE. This will open your PowerShell profile script. If the file doesn’t exist, it will prompt you to create it.

Once the file is opened, copy/paste the following line:

Set-Alias tf "$env:VS140COMNTOOLS..\IDE\tf.exe"

If you have a different version of Visual Studio installed, you may need to change the version of the common tools.

This is easily the simplest way to get Team Foundation Services added to your command line without messing with your PATH variable.

Tools Versions

Name                Version  Tools Variable
Visual Studio 2010  10.0     VS100COMNTOOLS
Visual Studio 2012  11.0     VS110COMNTOOLS
Visual Studio 2013  12.0     VS120COMNTOOLS
Visual Studio 2015  14.0     VS140COMNTOOLS

Handling Visual Studio 2017

The way Visual Studio 2017 has been reorganized, there are no more global environment variables lying around.

tf.exe now lives at the path below. I haven’t found an easier way to link to it than using the full path. Please note that the path below will vary based on your edition of Visual Studio.

C:\Program Files (x86)\Microsoft Visual Studio\2017\<Edition>\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\tf.exe

So for my scenario (with Enterprise installed), the alias would be set as:

Set-Alias tf "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\tf.exe"

Testing it out

If you run tf get in a source-controlled folder, you should see changes being brought down to your folder.

Are there any other tools you use that are not registered in the default PATH? Leave a comment and let everybody know!

If you want to know more about how to use the tf command, you should definitely take a look at the list of commands.

TFVC Command Reference

Managed Disk is now in GA - Convert all your VMs now!

Alright so this is kind of a big deal. This has been covered in the past but since this just hit General Availability, you need to get on this now. Or better, yesterday if you have access to a time machine.

Before Managed Disks, you had to create an Azure Storage Account for each VM to avoid IOPS limits. But this wasn’t enough to keep your VMs up and running; you also had to manage availability sets.

This has driven some people as far away from VMs as possible. But if you consider the insane advantages of VM Scale Sets (read: massive scale-out, massive machine specs, etc.), you don’t want to avoid this card in your solution deck. You want to embrace it. But once you embrace VMs, you have to start dealing with the Storage Accounts and the Availability Sets and, let’s be honest, it was clunky.

Today, no more waiting. It’s generally available and it’s time to embrace it.

Migrating Existing VMs to Managed Disks

Note: this code is taken from the sources below.

To convert single VMs without an availability set

# Stop and deallocate the VM
$rg = "MyResourceGroup"
$vm = "MyMachine"
Stop-AzureRmVM -ResourceGroupName $rg -Name $vm -Force

# Convert all disks to Managed Disks
ConvertTo-AzureRmVMManagedDisk -ResourceGroupName $rg -VMName $vm

To convert VMs within an availability set:

$rgName = 'myResourceGroup'
$avSetName = 'myAvailabilitySet'

$avSet = Get-AzureRmAvailabilitySet -ResourceGroupName $rgName -Name $avSetName

Update-AzureRmAvailabilitySet -AvailabilitySet $avSet -Managed

foreach($vmInfo in $avSet.VirtualMachinesReferences)
{
    $vm = Get-AzureRmVM -ResourceGroupName $rgName | Where-Object {$_.Id -eq $vmInfo.id}
    Stop-AzureRmVM -ResourceGroupName $rgName -Name $vm.Name -Force
    ConvertTo-AzureRmVMManagedDisk -ResourceGroupName $rgName -VMName $vm.Name
}

Source 1 | Source 2

Here are some more resources that may help you convert your VMs.

If you are using the Azure CLI, they updated their tooling to allow you to manage the “Managed Disks”.

If you’d rather use C# to manage your application, the .NET Azure SDK is also up to date.

Finally, if you want to start playing with VM Scale Sets and Managed Disks, here’s a quick “how to”.

Are Managed Disks something that you waited for? Let me know in the comments!

Setting up ESLint rules with AngularJS in a project using gulp

When creating a Single Page Application, it’s important to keep code quality and consistency at a very high level.

As more and more developers work on your code base, it may seem like everyone is using a different coding style. In C#, it’s bothersome at most. In JavaScript? It can be downright dangerous.

When I work with JavaScript projects, I always end up recommending using a linter. This will allow the team to make decisions about coding practices as early as possible and keep everyone sane in the long run.

If you don’t know ESLint yet, you should. It’s one of the best JavaScript linters available at the moment.

Installing ESLint

If your project is already using gulp to automate your different tasks, ESLint will be easy to set up.

Just run the following command to install all the necessary bits to make it runnable as a gulp task.

npm install eslint eslint-plugin-angular gulp-eslint

Alternatively, you could also just install eslint globally to make it available from the command line.

npm install -g eslint

Configuring ESLint

The next step is to create a .eslintrc.json file at the root of your project.

Here’s the one that I use.

{
  "env": {
    "browser": true
  },
  "globals": {
    "Insert Global Here": true
  },
  "extends": "angular",
  "rules": {
    "linebreak-style": [
      "error",
      "windows"
    ],
    "semi": [
      "error",
      "always"
    ],
    "no-console": "off",
    "no-debugger": "warn",
    "angular/di": [ "error", "$inject" ]
  }
}

First, we set up the environment. Setting browser to true imports a ton of globals (window, document, etc.) and tells ESLint that the code runs inside a browser rather than, say, a Node.js process.

Next come globals. If you are using libraries that define globals and you rely on those globals, this is where you declare them (e.g. jQuery, $, _).

extends defines the base rules that we will follow. angular enables the plugin we installed as well as all the basic JavaScript rules defined by default.

rules lets you customize the rules to your liking. Personally, I don’t like seeing the console and debugger errors, so I adjusted them the way I like. As for angular/di, it lets you set your preferred way of doing dependency injection with Angular. Anything that is not service.$inject = [...] will get rejected in my code base.

Sub-Folder Configuration

Remember that you can always add rules for specific folders. As an example, I often have a service folder. This folder only contains services, but the rule angular/no-service-method would raise an error for each of them.

Creating an .eslintrc.json file in that folder with the following content will prevent that error from ever showing up again.

{
  "rules": {
    "angular/no-service-method": "off"
  }
}

Creating a gulp task

The gulp task itself is very simple to create. The only thing left to pick is the format in which to display the errors.

You can pick from many formatters that are available.

var gulp = require("gulp");
var eslint = require("gulp-eslint");
var src = './app/';

gulp.task('eslint', function () {
    return gulp.src(src + '**/*.js')
        .pipe(eslint())
        .pipe(eslint.format('stylish'));
});

Customizing your rules

As with every team I collaborate with, I recommend that everyone sits down and defines their coding practices so that there are no surprises.

Your first stop is the list of available rules on the ESLint website. Everything with a checkmark is enabled by default and will be considered an error.

I wouldn’t take too much time going through the list. I would, however, run ESLint on your existing code base, see what your team considers errors, and identify cases where something should be one.

  • Do extra parentheses get on everyone’s nerves? Check out no-extra-parens
  • Are empty functions plaguing your code base? Check out no-empty-function
  • Do you consider using eval() a bad practice (you should!)? Add no-eval to the rulebook!

Improving your code one step at a time

By implementing a simple linter like ESLint, it’s possible to increase your code quality one step at a time. With the angular plugin for eslint, it’s also now possible to improve your Angular quality at the same time.

Any rules you think should always be enabled? What practices do you use to keep your code base clean and bug free? Let me know in the comments!

Angular 1.5+ with dependency injection and uglifyjs

Here’s a problem that doesn’t come too often.

You build your own build pipeline with AngularJS and you end up going to production with your development version. Everything runs fine.

Then you try your uglified version and… it fails. For the fix, skip to the end of the article. Otherwise? Keep on reading.

The Problem

Here’s the kind of stack trace you might see in your console.

Failed to instantiate module myApp due to:

Error: [$injector:unpr] http://errors.angularjs.org/1.5.8/$injector/unpr?p0=e

and this link shows you this:

Unknown provider: e

Our Context

Now… in a sample app, it’s easy. You have few dependencies and finding them will make you go through a few files at most.

My scenario was in an application with multiple developers after many months of development. Things got a bit sloppy and we made decisions to go faster.

We already had practices in place requiring developers to use explicit dependency injection instead of implicit. However, we had nothing but good faith enforcing it; nothing guarding against human mistakes or laziness.

Implicit vs Explicit

Here’s an implicit injection

angular.module('myApp')
    .run(function($rootScope){
        //TODO: write code
    });

Here’s what it looks like explicitly (inline version)

angular.module('myApp')
    .run(['$rootScope', function($rootScope){
        //TODO: write code
    }]);

Why is it a problem?

When UglifyJS minifies your code, it changes variable names; names that AngularJS can no longer match to a specific provider/injectable. That causes the problem we have, where Angular can’t find the right thing to inject. One thing UglifyJS won’t touch, however, is strings. So the '$rootScope' in the previous tidbit of code stays, and Angular can find the proper dependency to inject even after the variable names get mangled.

The Fix

ng-strict-di basically makes Angular fail anytime it finds an implicit declaration. Make sure to put it on your main Angular template. It will save you tons of trouble.

<html ng-app="myApp" ng-strict-di>
...
</html>

Instead of receiving the cryptic error from before, we’ll receive something similar to this:

Uncaught Error: [$injector:modulerr] Failed to instantiate module myApp due to:

Error: [$injector:strictdi] function(injectables) is not using explicit annotation and cannot be invoked in strict mode

Enable Transparent Data Encryption on SQL Azure Database

Among the many recommendations to make your data secure on Azure, one is to implement Transparent Data Encryption.

Most of the guidance you’ll find online to enable it is to run the following command in SQL:

-- Enable encryption  
ALTER DATABASE [MyDatabase] SET ENCRYPTION ON;
GO

While this is perfectly valid for an existing database, what if you want to create the database with TDE enabled right from the start?

That’s where ARM templates normally come in. It’s also where the documentation either falls short or isn’t meant to be used as-is right now.

So let me give you the necessary bits for you to enable it.

Enabling Transparent Data Encryption

First, create a new array of sub-resources for your database. Not your server; your database. This is important, otherwise it just won’t work.

Next, create a resource of type transparentDataEncryption and assign the proper properties.

It should look like this in your JSON Outline view.

Enabling Transparent Data Encryption on Azure

I’ve included the database ARM template I use for you to copy/paste.

ARM Template

{
  "name": "[variables('databaseName')]",
  "type": "databases",
  "location": "[resourceGroup().location]",
  "tags": {
    "displayName": "Database"
  },
  "apiVersion": "2014-04-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Sql/servers/', variables('sqlserverName'))]"
  ],
  "properties": {
    "edition": "[parameters('edition')]",
    "collation": "[parameters('collation')]",
    "maxSizeBytes": "[parameters('maxSizeBytes')]",
    "requestedServiceObjectiveName": "[parameters('requestedServiceObjectiveName')]"
  },
  "resources": [
    {
      "name": "current",
      "type": "transparentDataEncryption",
      "dependsOn": [
        "[variables('databaseName')]"
      ],
      "location": "[resourceGroup().location]",
      "tags": {
        "displayName": "Transparent Data Encryption"
      },
      "apiVersion": "2014-04-01",
      "properties": {
        "status": "Enabled"
      }
    }
  ]
}

Want more?

If you are interested in more ways to secure your data or your application in Azure, please let me know in the comments!