More back pedaling on .NET Core

They say great science is built on the shoulders of giants. Not here. At Aperture, we do all our science from scratch. No hand holding. - Cave Johnson

Removing project.json was explained by the sheer size of refactoring every other project type to unify everything under one model. It was a justified explanation.

After reading that the .NET team is removing grunt/gulp from the project templates, I’m left wondering what the motivation behind it is. Apparently, some users had issues with it and it had to be pulled. No more details were given.

In fact, they are pulling the old bundler/minifier back into the fray to keep the bundling/minifying feature.

All this so we can do dotnet bundle without requiring other tools. Everything will be unified once more. No need for external tooling or node. Everything will be Microsoft tools.

Now I get to explain to my client why we are using Node to automate our workflow when Microsoft definitely won’t be including it in its templates. It really makes you think twice before offering directions to clients.

Logging detailed DbEntityValidationException to AppInsights

I’ve been having some issues logging exceptions to AppInsights when a DbEntityValidationException was thrown.

AppInsights would show me the exception with all the associated details, but would not show me which validations had failed.

It turns out that the exception isn’t fully serialized: AppInsights captures the Message, the StackTrace and a few other default properties, but that’s it.

So how do I get the content of EntityValidationErrors? Manually of course!

Retrieving validation errors

In my scenario, I can do a simple SelectMany since I know I’m dealing with just one entity at a time. Depending on your scenario, you should consider inspecting the Entity property instead of just using the ValidationErrors.

Here’s what I did:

var telemetryClient = new TelemetryClient();
try
{
    // do stuff ...
}
catch (DbEntityValidationException ex)
{
    Dictionary<string, string> properties = ex.EntityValidationErrors
        .SelectMany(x => x.ValidationErrors)
        .ToDictionary(x => x.PropertyName, x => x.ErrorMessage);
    telemetryClient.TrackException(ex, properties);
}

Here’s how to handle many entities:

foreach (var validationError in dbException.EntityValidationErrors)
{
    var properties = validationError.ValidationErrors
        .ToDictionary(x => x.PropertyName, x => x.ErrorMessage);
    properties.Add("_EntityType", validationError.Entry.Entity.GetType().FullName);
    telemetryClient.TrackException(dbException, properties);
}

The only caveat is that the exception will be logged as many times as you have invalid entities.
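If that bothers you, you can also flatten everything into a single property bag and call TrackException once. Here’s a minimal sketch of that idea (my own variation, not the code above); property names are prefixed with the entity type so that two entities failing on the same property are less likely to collide:

var telemetryClient = new TelemetryClient();
try
{
    // do stuff ...
}
catch (DbEntityValidationException dbException)
{
    var properties = new Dictionary<string, string>();
    foreach (var validationError in dbException.EntityValidationErrors)
    {
        var entityType = validationError.Entry.Entity.GetType().Name;
        foreach (var error in validationError.ValidationErrors)
        {
            // prefix with the entity type so different entities' properties don't collide
            properties[entityType + "." + error.PropertyName] = error.ErrorMessage;
        }
    }
    telemetryClient.TrackException(dbException, properties);
}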

On that, back to tracking more exceptions!

It's the perfect time for breaking changes in .NET

Breaking changes are something that happens all the time in the open-source community.

We see people going from Grunt to Gulp because the new way of doing things is better. In the open-source world, projects live or die based on their perceived value.

In the .NET world, things are more stable. Microsoft ensures backward compatibility on its languages and frameworks for years. People get used to seeing the same technology around and, with that, see no reason to change things, since they are going to be supported, sometimes for decades.

Microsoft’s adoption of OSS practices, however, changed its approach to software. To become faster, things needed to be broken down and rebuilt. Changes needed to happen. To build a framework ready to support the fast-paced change of tomorrow, things we were used to are being ripped apart and rebuilt from scratch.

Not everything was removed. Some good concepts were kept, but it opened the door to changes.

Being open to change

This is the world we live in. I don’t know if it’s Microsoft’s direction, but we need to stay open to change even if it breaks our stuff. Microsoft is a special island where things stay alive for way longer than they sometimes should.

In ASP.NET Core, they went so far that they had to revert some changes to be able to deliver.

Good or bad?

Here’s my opinion on the matter. Things that change too quickly can be bad for your ecosystem because people can’t find their footing and spend more time finding out what broke than delivering value.

Change is necessary. Otherwise, you end up like Java and have this monstrosity when handling dates. C# isn’t too different in the collections department either.

What the future will look like

I don’t know what the team is planning. What I hope is that dead parts of the framework are retired as newer versions are released.

It’s time to move the cheese and throw the dead weight overboard. Otherwise, you’re just dragging it along for the next 10 years.

Tracking your authenticated users with Azure AppInsights

AppInsights is very easy to set up in your web application. Taking a few minutes to configure a few additional things, however, can really pay off.

Once the initial AppInsights script has been initialized, if you can retrieve the authenticated user’s id, you can easily attach it to each request.

Simply add this on every page load where a user is authenticated:

var userId = 'test@example.com';
appInsights.setAuthenticatedUserContext(userId);

This will create a cookie that will track your authenticated user on each event/page view/request.

The only wrinkle left to iron out is when two users alternate sessions in the same browser without closing it. See, the cookie has a session lifetime. Most of the time it will be fine, but let’s keep our data clean.

Every time a user is considered unauthenticated or logs out, include the following:

appInsights.clearAuthenticatedUserContext();

This will ensure that your authenticated context (the cookie) is cleared and no events are misattributed to a user.

Integrating AppInsights Instrumentation Key in an AngularJS Application

So I recently had to integrate AppInsights in an AngularJS application.

The issue I faced was that I didn’t want to pre-create my AppInsights instance; I wanted to rely on Azure Resource Manager to instantiate it automatically. In my previous post, I showed you how to move the Instrumentation Key directly into AppSettings. How do you get it on the client?

You could use MVC and render it directly. But the thing is, given the application’s architecture… there’s no C# running on this side of the project. No .NET at all. So how do I keep my dependencies low while still getting that AppSettings value to the client?

Http Handlers

HTTP handlers were introduced at the very beginning of .NET. They are lightweight, have no dependencies and are very fast.

Exactly what we need here.

Here’s the code that I used:

using System.Configuration;
using System.Web;

public class InstrumentationKeyHandler : IHttpHandler
{
    public bool IsReusable => true;

    public void ProcessRequest(HttpContext context)
    {
        var setting = ConfigurationManager.AppSettings["InstrumentationKey"];
        context.Response.Clear();
        context.Response.ContentType = "application/javascript";
        context.Response.Write($"(function(){{window.InstrumentationKey = '{setting}'}})()");
    }
}

Here’s how you configure it in your web.config:

<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="InstrumentationKey" type="MyNamespace.InstrumentationKeyHandler, MyAssembly" resourceType="Unspecified" path="InstrumentationKey.ashx" verb="GET" />
    </handlers>
  </system.webServer>
</configuration>

And here’s how you use it:

<script src="InstrumentationKey.ashx"></script>
<script type="text/javascript">
    if (window.InstrumentationKey) {
        console.debug('InstrumentationKey found.');
        //TODO: Insert AppInsights code here.
    } else {
        console.debug('InstrumentationKey missing.');
    }
</script>

Why does it work?

We are relying on the browser’s basic behavior when loading scripts. Scripts may be downloaded in parallel, but they are always run sequentially, in document order.

In this scenario, we pre-load our Instrumentation Key inside a global variable and use it in the next script tag.

Server-less Office with O365

I recently had a very interesting conversation with a colleague of mine about setting up basic services for small/medium businesses.

His solution was basically to configure a server on-site and offer Exchange, DNS, Active Directory, etc. in a box.

We had a very interesting discussion, but let me share what I suggested to save him time and his customer money.

Note: I’m not an O365 expert. I’m not even an IT Pro. Just a very passionate technologist.

Replacing Exchange with O365

First, Exchange is a very complicated beast to configure. Little mistakes are very time-consuming to debug.

O365 comes with everything up and running. Short of configuring quotas and a few other things, there is very little left to set up.

But the most important point for someone having a client on O365 is that when there’s an issue anywhere, you don’t have to get on-site or VPN/RDP to the machine. You go to the online dashboard, you debug and you deliver value.

You save time and help your client make better use of their resources.

DNS and Active Directory

When you create an O365 service, it basically ships with its own Active Directory. Windows 10 allows you to join your local machine to an Azure Active Directory domain.

No more need for a Domain Controller client-side.

What is needed?

Buy a wireless router. Plug it in at your client’s office. Once all the machines have joined the domain, they can all collaborate with each other.

If something goes wrong, replace the router. Everything else is cloud-based.

  • File Sharing? OneDrive is included
  • Email? O365
  • Active Directory? On Azure

Why go that way?

Let’s be clear. The job is going to change. Small clients never needed the infrastructure we pushed on them earlier. At best, the server sat idle; at worst, it ended up in a closet overheating or being infested by pests (not joking). There just wasn’t anything like the cloud to provide for them.

By helping them lighten their load, we free up our own time to serve more clients and offer them something different.

If you are not doing this now, somebody else is going to offer your client that opportunity. After the paperless office, here’s the server-less office.

Importing your AppInsights Instrumentation Key directly into your AppSettings

So I had an application recently that needed to use AppInsights. The application was deployed using Visual Studio Release Management, so everything needed to be deployed through an Azure Resource Manager template.

Problem

You can’t specify an instrumentation key for an AppInsights resource; it is generated automatically when the resource is created.

Solution

Retrieve the key directly from within the template.

Here’s a trimmed down version of an ARM template that does exactly this.

{
    "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": { },
    "variables": { },
    "resources": [
        {
            "name": "[variables('WebAppName')]",
            "type": "Microsoft.Web/sites",
            "location": "[resourceGroup().location]",
            "apiVersion": "2015-08-01",
            "dependsOn": [],
            "tags": { },
            "properties": { },
            "resources": [
                {
                    "name": "appsettings",
                    "type": "config",
                    "apiVersion": "2015-08-01",
                    "dependsOn": [],
                    "tags": { },
                    "properties": {
                        "InstrumentationKey": "[reference(resourceId('Microsoft.Insights/components', variables('appInsightName')), '2014-04-01').InstrumentationKey]"
                    }
                }
            ]
        },
        {
            "name": "[variables('appInsightName')]",
            "type": "Microsoft.Insights/components"
        }
    ]
}

Key piece

See that little AppSetting named InstrumentationKey? That’s where the magic happens.

Your instrumentation key is now bound to your WebApp’s AppSettings without carrying magic strings around.
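From there, the application reads the key like any other app setting. Here’s a minimal sketch of wiring it into the classic AppInsights SDK (assuming the Microsoft.ApplicationInsights package and a reference to System.Configuration; the TelemetrySetup class name is mine):

using System.Configuration;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

public static class TelemetrySetup
{
    public static TelemetryClient CreateClient()
    {
        // the key was injected into AppSettings by the ARM template above
        TelemetryConfiguration.Active.InstrumentationKey =
            ConfigurationManager.AppSettings["InstrumentationKey"];

        return new TelemetryClient();
    }
}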

Writing cleaner JavaScript code with gulp and eslint

With the new ASP.NET Core 1.0 RC2 right around the corner and its deep integration with the Node.js workflow, I thought about putting out some examples of what I use for my own workflow.

In this scenario, we’re going to see how we can improve the JavaScript code that we are writing.

Gulp

This example uses gulp.

I’m not saying that gulp is the best tool for the job. I just find that gulp works really well for our team, and you should seriously consider it.

Base file

Let’s get things started. We’ll start off with the base gulpfile that ships with the RC1 template.

The first thing we are going to do is check what is being done and what is missing.

/// <binding Clean='clean' />
"use strict";

var gulp = require("gulp"),
    rimraf = require("rimraf"),
    concat = require("gulp-concat"),
    cssmin = require("gulp-cssmin"),
    uglify = require("gulp-uglify");

var paths = {
    webroot: "./wwwroot/"
};

paths.js = paths.webroot + "js/**/*.js";
paths.minJs = paths.webroot + "js/**/*.min.js";
paths.css = paths.webroot + "css/**/*.css";
paths.minCss = paths.webroot + "css/**/*.min.css";
paths.concatJsDest = paths.webroot + "js/site.min.js";
paths.concatCssDest = paths.webroot + "css/site.min.css";

gulp.task("clean:js", function (cb) {
    rimraf(paths.concatJsDest, cb);
});

gulp.task("clean:css", function (cb) {
    rimraf(paths.concatCssDest, cb);
});

gulp.task("clean", ["clean:js", "clean:css"]);

gulp.task("min:js", function () {
    return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
        .pipe(concat(paths.concatJsDest))
        .pipe(uglify())
        .pipe(gulp.dest("."));
});

gulp.task("min:css", function () {
    return gulp.src([paths.css, "!" + paths.minCss])
        .pipe(concat(paths.concatCssDest))
        .pipe(cssmin())
        .pipe(gulp.dest("."));
});

gulp.task("min", ["min:js", "min:css"]);

As you can see, we basically have 4 tasks and 2 aggregate tasks.

  • Clean JavaScript files
  • Clean CSS files
  • Minify JavaScript files
  • Minify CSS files

The aggregate tasks are basically just to do all the cleaning or the minifying at the same time.

Getting more out of it

Well, that brings us to feature parity with the JavaScript and CSS minification that was available in MVC 5. However, why not go a step further?

Linting our Javascript

One of the most common things we need to do is make sure we do not write horrible code. Linting is a static analysis technique that detects problems or stylistic issues early.

How do we get this working with gulp?

First, we install gulp-eslint by running npm install gulp-eslint --save-dev in the web application project folder. This installs the required dependencies, and we can start writing some code.

First, let’s start by getting the dependency:

var eslint = require('gulp-eslint');

And into your default ASP.NET Core 1.0 project, open up site.js and copy the following code:

function something() {
}

var test = new something();

Let’s run the min:js task with gulp like this: gulp min:js. This will show that our file is minified but… there’s something wrong with the style of this code. The something function is used as a constructor, so it should be Pascal-cased, and we want this reflected in our code.

Let’s integrate the linter in our pipeline.

First let’s create our linting task:

gulp.task("lint", function() {
    return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
        .pipe(eslint({
            rules : {
                'new-cap': 1 // function need to begin with a capital letter when newed up
            }
        }))
        .pipe(eslint.format())
        .pipe(eslint.failAfterError());
});

Then, we need to integrate it in our minify task.

gulp.task("min:js" , ["lint"], function () { ... });

Then we can either run gulp lint or gulp min and see the result.

C:\_Prototypes\WebApplication1\src\WebApplication1\wwwroot\js\site.js
  6:16  warning  A constructor name should not start with a lowercase letter  new-cap

And that’s it! You can pretty much build your own configuration from the available rule set and have clean JavaScript as part of your build flow!

Many more plugins available

More gulp plugins are available on the registry. Whether you want to lint, transpile to JavaScript (TypeScript, CoffeeScript), compile CSS (Less, Sass), minify images… everything can be included in the pipeline.

Look up the registry and start hacking away!

Creating a simple ASP.NET 5 Markdown TagHelper

I’ve been dabbling a bit with the new ASP.NET 5 TagHelpers and I was wondering how easy it would be to create one.

I’ve created a simple Markdown TagHelper with the CommonMark implementation.

So let me show you what it is, what each line of code is doing and how to implement it in an ASP.NET MVC 6 application.

The Code

using CommonMark;
using Microsoft.AspNet.Mvc.Rendering;
using Microsoft.AspNet.Razor.Runtime.TagHelpers;

namespace My.TagHelpers
{
    [HtmlTargetElement("markdown")]
    public class MarkdownTagHelper : TagHelper
    {
        public ModelExpression Content { get; set; }

        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.TagMode = TagMode.SelfClosing;
            output.TagName = null;

            var markdown = Content.Model.ToString();
            var html = CommonMarkConverter.Convert(markdown);
            output.Content.SetContentEncoded(html);
        }
    }
}

Inspecting the code

Let’s start with the HtmlTargetElementAttribute. This will wire the HTML tag <markdown></markdown> to be interpreted and processed by this class. There is nothing stopping you from having more than one target.

You could for example target element <md></md> by just adding [HtmlTargetElement("md")] and it would support both tags without any other changes.

The Content property will allow you to write code like this:

@model MyClass

<markdown content="@ViewData["markdown"]"></markdown>
<markdown content="Markdown"></markdown>

This easily allows you to use your model or any server-side code without having to handle data mapping manually.

TagMode.SelfClosing lets the tag be written as a self-closing tag rather than having content inside (which we’re not going to use anyway). So now we can write this:

<markdown content="Markdown" />

All the remaining lines of code are dedicated to making sure that the content we render is actual HTML. Setting output.TagName to null makes sure that we do not render the markdown tag itself.

And… that’s it. Our code is complete.

Activating it

Now, you can’t just create TagHelpers and have them automatically picked up without wiring up one thing.

In your ASP.NET 5 projects, go to /Views/_ViewImports.cshtml.

You should see something like this:

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"

This will load all TagHelpers from the Microsoft.AspNet.Mvc.TagHelpers assembly.

Just duplicate the line and type in your own assembly name, as shown below.
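For example, assuming the TagHelper above is compiled into an assembly named My.TagHelpers (a hypothetical name matching the namespace used earlier), the line would look like this:

@addTagHelper "*, My.TagHelpers"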

Then in your Razor code you can have the code below:

public class MyClass
{
    public string Markdown { get; set; }
}
@model MyClass
@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>

<markdown content="Markdown"/>

Which will output your markdown formatted as HTML.

Now, whether you load your markdown from files, a database or anywhere else… you can have your users write rich text in any text box and have your application generate safe HTML.

Components used

  • CommonMark.NET for the Markdown-to-HTML conversion

Should our front-end websites be server-side at all?

I’ve been toying around with projects like Jekyll, Hexo and even some hand-rolled software that generates HTML files for me based on data. The thought that crossed my mind was…

Why do we need dynamically generated HTML again?

Let me take examples and build my case.

Example 1: Blog

Of course, simpler examples like blogs could literally be all static. If you need comments, you could go with a system like Disqus. That is quite literally one of the only parts of such a system that is dynamic.

RSS feed? Generated from the posts. The posts themselves? They could be generated periodically from a database or from Markdown files. The resulting output can be hosted on a Raspberry Pi without any issues.

Example 2: E-Commerce

This one is more of a problem. Here are the things that don’t change a lot. Products. OK, they may change but do you need to have your site updated right this second? Can it wait a minute? Then all the “product pages” could literally be static pages.

Product reviews? They will need to be approved anyway before you want them live. Put them in a server-side queue and regenerate the product page with the updated review once it’s done.

There are three things I see that would need to be dynamic in this scenario: search, checkout and reviews.

Search, because as your product catalog scales up, so does your data; doing the search client-side won’t scale at any level. Checkout, because we are now handling an actual order and it needs a server component. Reviews, because we’ll need to approve and publish them.

In this scenario, only the search is an actual “read” component that stays server-side. Everything else? Pre-generated. Even if the search brings you the list of products dynamically, each result can still point to a static page.

All the other write components? Queued server-side to be processed by the business itself with either Azure or an off-site component.

All the backend side of the business (managing products, availability, sales, whatnot, etc.) will need a management UI that will be 100% dynamic (read/write).

Question

So… do we need dynamic front-end with the latest server framework? On the public facing too or just the backend?

If you want to discuss it, Tweet me at @MaximRouiller.

You should not be using WebComponents yet

Have you read about WebComponents? It sounds like something we have all been trying to achieve on the web for… well… a long time.

If you take a look at the specification, it’s hosted on the W3C website. It smells like a real specification. It looks like a real specification.

The only issue is that Web Components is really four specifications. Let’s take a look at all four of them.

Reviewing the specifications

HTML Templates

Specification

This particular specification is not part of the “Web Components” section; it has been integrated into HTML5. Hence, this one is safe.

Custom Elements

Specification

This specification is for review and not for implementation!

Alright, let’s not touch this one yet.

Shadow DOM

Specification

This specification is for review and not for implementation!

Wow. Okay, so this one is out the window too.

HTML Imports

Specification

This one is still a working draft so it hasn’t been retired or anything yet. Sounds good!

Getting into more details

So open all of those specifications. Go ahead. I want you to read one section in particular: the authors/editors section. What do we learn? That those specs were drafted, edited and all done by the Google Chrome team. Except maybe HTML Templates, which has Tony Ross (previously a PM on the Internet Explorer team).

What about browser support?

Chrome already has all of the specs implemented.

Firefox has implemented them but put them behind a flag (about:config, search for the property dom.webcomponents.enabled).

In Internet Explorer, they are all listed as “Under Consideration”.

What that tells us

Google is pushing for a standard. Hard. They wrote the specs, and they are pushing them very hard, since all of this is available in Chrome stable right now. No other vendor has contributed to the specs themselves. Polymer, a project built around WebComponents, is also built by… well, the Chrome team.

That tells me that nobody should be implementing this in production right now. If you want to contribute to the spec, fine. But WebComponents are not to be used.

Otherwise, we’re just getting into the same situation we were in 10-20 years ago with Internet Explorer, and we know that’s a painful path.

What is wrong right now with WebComponents

First, it’s not cross-platform. We’ve handled that in the past; that’s not something that will stop us.

Second, the current specification is being implemented in Chrome as if it were recommended by the W3C (it is not). That may lead to changes in the specification which could render your current implementation completely inoperable.

Third, there’s no guarantee that the current spec is even going to be accepted by the other browsers. If we get there and Chrome doesn’t move, we’re back to the Internet Explorer 6 era, but this time with Chrome.

What should I do?

As far as production is concerned, do not use WebComponents directly. Also avoid Polymer, as it’s only a thin wrapper around WebComponents (even with the polyfills).

Use frameworks that abstract away the WebComponents part, like X-Tag or Brick. That way you can benefit from the features without learning a specification that may become obsolete very quickly or never be implemented at all.

Fix: Error occurred during a cryptographic operation.

Have you ever had this error while switching between projects using the Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If none is defined, an auto-generated one is used. If the key used to encrypt isn’t the same as the one used to decrypt?

Boom goes the dynamite.
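If you want the cookie to survive across machines or between related projects, one option (a sketch, not necessarily the fix this error calls for) is to pin an explicit machine key in web.config. The values below are placeholders; generate your own keys:

<configuration>
  <system.web>
    <!-- placeholder values: generate and paste your own validationKey/decryptionKey -->
    <machineKey validationKey="YOUR-VALIDATION-KEY"
                decryptionKey="YOUR-DECRYPTION-KEY"
                validation="HMACSHA256"
                decryption="AES" />
  </system.web>
</configuration>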

Content In HTML? Compile it from Markdown instead.

Most content on our blogs, CMSes or anything related to content input is created today with WYSIWYG editors. Those normally run in a browser but can also be found in desktop applications.

Browser version

Libraries like Bootstrap WYSIWYG and TinyMCE leverage your browser to generate HTML that isn’t particularly stylized (no CSS classes or styles), relying instead on the page’s own styles to render properly. Those by themselves are not too complicated. However, the content is locked into HTML semantics at the moment of writing.

When writing your content in a CMS, it will be stored as-is and re-rendered with almost the exact same HTML that you created at first (some sanitize your input and sometimes append content).

Problem for code-based blog

Most development blogs will contain some code at some point. This is where things start to smell. Most WYSIWYG editors will wrap the code in pre or code tags (or both), with a class attribute tied to whatever syntax highlighter was in use at the time.

My blog has been migrated multiple times. At first I was on Blogger, then on BlogEngine.NET and finally on MiniBlog. All those engines stored the code as it was written with the editor. Worse still if I used Live Writer, since it does not strip style attributes and other nonsense. At best, you end up with some very horrible HTML that you have to clean up between export and import. At worst, there are serious issues with how your posts are rendered.

The content is rendered as it was written, but not as it was meant to appear. Writing code without Live Writer? Well, you’ll need to write some HTML, and don’t forget the proper tags and the correct CSS classes!

Content as source code

I see my content as source code. It should be in a standard format that can be compiled to HTML when I need it. Did my syntax highlighter plugin change so that I now need to re-render my code tags? I want to just toggle some options, not go back through all my content doing Find & Replace.

I want to manage my content like I manage my code. A bug? Pull Request. Typo? Pull Request.

That is why my blog is going to end up in a GitHub repository very soon. It’s easier to correct things that way.

But why not HTML?

HTML, for all its claims of being “human readable”, really is not. Once you start creating complex content with it, you need a powerful editor to write HTML. It’s not something you can comfortably do in Notepad.

Writing a link in Markdown is something like this:

[My Blog](http://blog.decayingcode.com)

And in HTML it goes to this:

<a href="http://blog.decayingcode.com">My Blog</a>

That’s why I think a format like Markdown is the way to go. You write the semantics you want and let the Markdown renderer generate the proper HTML. Markdown is the source; HTML is the compiled output.

Do I want to generate an EPUB instead? You can. The Pro Git book is written entirely in Markdown.

Do I want to generate a PDF instead? You also can. In fact, Pandoc supports a lot more: it has you covered for HTML, Microsoft Word, OpenOffice/LibreOffice, EPUB, TeX, PDF, etc.

If all my blog post were written in Markdown, I could go back in time and offer a PDF/EPUB version of every blog post I ever did. Not as easy with HTML if things are not standardized.

Converting HTML to Markdown

I’m currently toying with Hexo. It has many converters, including one that imports from RSS. It managed to import all my blog posts (tags included), but I was left with a bunch of Markdown files that needed some very tough love.

Just like with any legacy code, I went through my old writing and removed all the HTML left over from the conversion. Most of it was already gone, mind you. But the code blocks? They could not be converted properly. I had to manually remove pre and code tags everywhere. Indentation was also messed up from previous imports. This had to be fixed.

Right now, I have regenerated a whole copy of my blog without breaking any article links. All the code has been standardized, indented and is displayed through a plugin. If I change the theme or the blog engine, I just take my Markdown files with me and I’m mostly good to go.

Deploying it to Azure

Once you have a working Hexo directory, it will generate its content into a public folder.

Since we only want to deploy that folder, we need to add a file named .deployment at the root of our repository.

Its content should be:

[config]
project = public

You can find more options on the Kudu project page about Customizing Deployments.

Issues left to resolve

Unless I move to an engine like Jekyll or Octopress, most blog engines do not support Markdown files as blog input. We’re still going to have to deal with converters for the time being.

Renewed MVP ASP.NET/IIS 2015

Well there it goes again. It was just confirmed that I am renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group… all of this is done in my free time, and it requires a lot of it. But I’m so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let’s do it again this year! :)

Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

So I had this issue while I was moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old plan, giving me this error message.

After a few clicks left and right and a lot of wasted time, I found a blog post that provides a script to help you debug the problem, along with the exact explanation of why it happens.

To make it quick, it’s all about “Deployment Slots”. Among other things, slots have their own serverFarm setting, and it does not change when you change their parent’s plan in PowerShell (I haven’t tried through the portal).

Here’s a copy of the script from Harikharan Krishnaraju for future reference:

Switch-AzureMode AzureResourceManager
$Resource = Get-AzureResource

foreach ($item in $Resource)
{
    if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
    {
        $plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan;
        write-host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name ;
    }
    elseif ($item.ResourceType -Match "Microsoft.Web/sites")
    {
        $plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan;
        write-host "WebHostingPlan " $plan " under site " $item.Name ;
    }
}

Switching Azure Web Apps from one App Service Plan to another

So I had to make some changes to App Service Plans for one of my clients. The first thing I looked for was a way to do it in the portal. A few clicks and I’m done!

But before I get into how to move one of them, I need to tell you why I had to move 20 of them.

Consolidating the farm

First, my client had a lot of web apps deployed left and right in different "Default" service plans. Most were created automatically by scripts or even by Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale and, especially, the instance size we deploy on. So we came up with a list of the hosting plans we needed, the apps that had to be moved and the hosting plan each one was currently on.

That list came to 20 web apps to move. The portal wasn’t going to cut it. It was time to bring in the big guns.

Powershell

PowerShell is the command line for Windows. It’s powered by awesomeness and cats riding unicorns. It allows you to do things like remote-control Azure, import/export CSV files and so much more.

CSV and Azure were exactly what I needed. Since we had built the list of web apps to migrate in Excel, CSV was the way to go.

The Code or rather, The Script

What follows is what we used. It’s heavily inspired by what I found online.

My CSV file has three columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made the script more generic, but since I was working with apps in EastUS only, well… I didn’t need more. A sample of the expected file is shown below.
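For illustration, here’s what such a CSV could look like; the app and plan names are hypothetical:

App,ServicePlanSource,ServicePlanDestination
mywebapp-01,Default1,StandardPlanEast
mywebapp-02,Default2,StandardPlanEast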

This script should be considered "works on my machine"; I haven’t tested all the edge cases.

Param(
    [Parameter(Mandatory=$True)]
    [string]$filename
)

Switch-AzureMode AzureResourceManager
$rgn = 'Default-Web-EastUS'

$allAppsToMigrate = Import-Csv $filename
foreach($app in $allAppsToMigrate)
{
    if($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
        $prop = @{ 'serverFarm' = $dest}
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}

Temporarily ignore SSL certificate problem in Git under Windows

So I’ve encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we’re working on a project hosted on a local Git server (Stash) and the certificates changed. While IT was working on fixing the issue, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say “ignore it please”?

Temporary solution

This one only applies because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

SET GIT_SSL_NO_VERIFY=true

This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.

Permanent solution

Fix your certificates. Oh… you mean it’s self-signed and you will forever use that one? Then install it on all machines.

Seriously, I won’t show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid is just plain dangerous.

Fix it.

NOW.

The Yoda Condition

So this will be a short post. I would like to introduce a term into my vocabulary, and into yours too if it isn’t already there.

I would like to credit Nathan Smith for teaching me that term this morning. First, the tweet:

Chuckling at “disallowYodaConditions” in JSCS… https://t.co/unhgFdMCrh — Awesome way of describing it. pic.twitter.com/KDPxpdB3UE
— Nathan Smith (@nathansmith) November 12, 2014

So… this made me chuckle.

What is the Yoda Condition?

The Yoda Condition can be summarized as “inverting the operands compared in a conditional”.

Let’s say I have this code:

string sky = "blue";
if (sky == "blue")
{
    // do something
}

It can be read easily as “If the sky is blue”. Now let’s put some Yoda into it!

Our code becomes :

string sky = "blue";
if ("blue" == sky)
{
    // do something
}

Now our code reads as “If blue is the sky”. And that’s why we call it a Yoda condition.

Why would I do that?

First, if you type “=” instead of “==”, the code will fail at compile time, since you can’t assign a value to a string literal. It can also avoid certain null reference errors: “blue”.Equals(sky) won’t throw if sky is null, while sky.Equals(“blue”) will.
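For what it’s worth, in C# specifically the compiler already catches the accidental assignment in both forms. A quick sketch (my addition, not from the original argument):

string sky = "blue";

// Yoda form: assigning to a literal is a compile-time error
// if ("blue" = sky) { }   // error: the left-hand side of an assignment must be a variable

// Regular form: also a compile-time error in C#, because the assignment
// evaluates to a string, not a bool
// if (sky = "blue") { }   // error: cannot implicitly convert type 'string' to 'bool'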

What’s the cost of doing this then?

Besides getting on the nerves of every programmer on your team? You reduce the readability of your code by a huge factor.

Each developer on your team will hit a snag on every if, since they will have to learn how to speak “Yoda” to read your code.

So what should I do?

Avoid it. At all costs. Readability is the most important thing in your code. To be honest, you’re not going to be the only person maintaining that app for years to come. Make it easy for the maintainers and remove that Yoda talk.

The problem this kind of code solves isn’t worth the readability you are losing.