Resolved: AppInsights moved to EastUS, deployment failing with CentralUS message

TLDR; AppInsights was moved to EastUS, but its AutoScale Settings and Alerts were kept in CentralUS, causing a chain reaction of failures all around. Code to clean up is available at the end of the post.

As AppInsights hit General Availability at Microsoft Connect 2016, a few issues were introduced that caused our VSTS builds to start failing. Here’s the message that we got:

2016-11-18T19:25:33.9545678Z [Azure Resource Manager]Creating resource group deployment with name WebSiteSQLDatabase-20161118-1925
2016-11-18T19:25:37.8711397Z ##[error]The resource 'myresource-int-myresource-int' already exists in location 'centralus' in resource group 'myresource-int'. A resource with the same name cannot be created in location 'East US'. Please select a new resource name.
2016-11-18T19:25:37.9071426Z ##[section]Finishing: Azure Deployment:Create Or Update Resource Group action on eClientRcgt-int

So I started debugging. After a few days of trying to get this issue fixed, I decided to generate the template from the portal. I looked up the myresource-int-myresource-int inside of it and found out that it was an automatically generated name for Microsoft.insights/autoscalesettings. The worst part was… its location was Central US. And it was not alone.

Other Alert rules were also located in Central US and just fixing the autoscalesettings would get me other error messages.

Of course, there’s no easy way in the portal to delete those. However, with PowerShell, it’s trivial.

It is important to note that it’s perfectly safe to delete them on our end since we deploy with Azure Resource Manager templates. They will be recreated at the next CI/CD run.

Here’s the quick code to delete them if you encounter this issue.

$resource = Get-AzureRmResource
$rgn = 'resourceGroup'
$resource | where { $_.ResourceType -eq 'Microsoft.insights/autoscalesettings' -and $_.Location -eq 'centralus' -and $_.ResourceGroupName -eq $rgn } | Remove-AzureRmResource
$resource | where { $_.ResourceType -eq 'Microsoft.insights/alertrules' -and $_.Location -eq 'centralus' -and $_.ResourceGroupName -eq $rgn } | Remove-AzureRmResource

Cleaning up Azure Resource Manager Deployments in Continuous Integration Scenario

When deploying with Azure Resource Manager Templates (aka ARM Templates), provisioning an environment has never been easier.

It's as simple as providing a JSON file that represents your architecture, another JSON file that contains all the parameters for this architecture, and boom. Online you go.

Personally, I hate deploying from Visual Studio for anything but testing. Once you start delivering applications, you want something centralized, sturdy and battle tested. My tool of choice is Visual Studio Team Services. VSTS integrates perfectly with Azure with tasks to Create/Upgrade ARM templates on an Azure Subscription.

Our current setup includes 4 environments and 4-5 developers. One of these environments is a CI/CD environment. Every single check-in that happens in a day will be deployed, so our resource group is also being updated like crazy. Just to give you numbers, 50 deployments in a day isn't unheard of.

The problem is the Azure Resource Manager deployment limit.

Azure Resource Manager Deployment Limit

So… 800, eh? Let's do the math: 20 deployments per day, 20 workable days in a month… 400 deployments per month.

2 months. That's how long before we run into an error when deploying on Azure. I've already raised the issue with one of the developers over at Microsoft, but in the meantime, I need to clear these deployments!

There are many ways to do this.

The Portal

Oh boy… don’t even think about it. You’ll have to do them one by one. There’s nothing to multi-select. And you’ll need to do that every month/2 months.

Everything that is repeating itself is worth automating.

PowerShell - The normal way

I’ve tried running the following command:

$resourceGroupName = "MyResourceGroup"
Get-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName | Remove-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName

The problem with this is that each deployment is deleted synchronously. You can't delete in batch. With 800 deployments to clean up, it took me hours to delete a few hundred before my Azure PowerShell login session expired and crashed on me.

PowerShell - The parallel way

PowerShell allows parallel commands to be run side by side. It runs those commands in separate sessions, each in a separate PowerShell process.

When I initially ran this command, I had about 300 deployments to clean on one of my resource groups. This, of course, launched 300 powershell.exe processes that executed the required commands.

$path = ".\profile.json"
Login-AzureRmAccount
Save-AzureRmProfile -Path $path -Force

$resourceGroupName = "MyResourceGroup"
$deployments = Get-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName
$deploymentsToDelete = $deployments | where { $_.Timestamp -lt ((get-date).AddDays(-7)) }

foreach ($deployment in $deploymentsToDelete) {
    Start-Job -ScriptBlock {
        param($resourceGroupName, $deploymentName, $path)
        Select-AzureRmProfile -Path $path
        Remove-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -DeploymentName $deploymentName
    } -ArgumentList @($resourceGroupName, $deployment.DeploymentName, $path) | Out-Null
}

Then you have to keep track of them. Did they run? Did they fail? Get-Job will return you the list of all jobs launched in this session. The list may be quite extensive. So let’s keep track only of those running:

(Get-Job -State Running).Length

If you want to only see the results of the commands that didn’t complete and clear the rest, here’s how:

# Waits on all jobs to complete
Get-Job | Wait-Job

# Removes the completed one.
Get-Job -State Completed | Remove-Job

# Output results of all jobs
Get-Job | Receive-Job

# Cleanup
Get-Job | Remove-Job

The results?

Instead of taking all night and failing on over half the operations, it managed to do all of them in a matter of minutes.

The longest run I had on my machine was 15 minutes for about 300 deployments. Always better than multiple hours.

If you create a PowerShell script that automates this task on a weekly basis, you won't have to wait so long. If you include Azure Automation in this? You're good for a very long time.

Yarn 0.17 is out. Time to ditch npm.

Defenestrating npm. Original: http://trappedinvacancy.deviantart.com/art/Defenestration-115846260

Original by TrappedInVacancy on DeviantArt.

If you were avoiding Yarn because of its tendency to delete your bower folder, it's time to install the latest version.

Among the many changes, it removes support for bower. So yarn is truly a drop-in replacement for npm now.

To upgrade:

npm install -g yarn

Ensure that yarn --version returns 0.17. Then run it against your code base by simply typing this:

yarn

Only thing you should see is a yarn.lock file.

Wait… why should I care about yarn?

First, yarn freezes your dependencies when you first install them. This allows you to avoid upgrading a sub-sub-sub-sub-sub-sub-sub-sub dependency that could break your build because someone down the chain didn't get semver.

The lock file is an alphabetically ordered, YAML-like file that is automatically generated when running yarn. Every time this file changes, you know your dependencies changed. As simple as that. Not only that, it also freezes all child dependencies as well. That makes the build process repeatable and non-breaking even if someone decides that semver is stupid.

Second, yarn allows for interactive dependency upgrade. Just look at this beauty.

Interactive Upgrade!

Cherry-picking your upgrades has never been easier. Add yarn why <PACKAGE NAME>, which gives you the reason for a package's existence, and yarn truly allows you to see and manage your dependencies with ease.
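For example (commands as shipped with Yarn at this point; double-check yarn help on your version, and lodash is just a placeholder package name):

yarn upgrade-interactive
yarn why lodash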

Finally, yarn will checksum and cache every package it downloads. Even better for build servers that always re-install the same packages. Yarn also installs and downloads everything in parallel. Everything to get you fast and secure builds for that special Single Page Application you've been building.

If you want the whole sales pitch, you can read Facebook's announcement page.

What about you?

What is your favorite Yarn feature? Have you upgraded yet? Leave me a comment!

Do not miss Microsoft Connect 2016

Whether your focus is on .NET, .NET Core, or Azure…

You do not want to miss Connect 2016.

So what should you watch?

I’m an architect or business oriented

Watch Day 1. All the big announcements are going to be condensed in this day. If you have something to relay to the business or need to know Microsoft’s direction for the foreseeable future? It’s where you’ll get that information.

I’m a developer/architect or just love the technical details

You’ll want to add Day 2 to the mix. That’s where all the cool stuff and how to use it will be shown. I’m expecting lots of cool demos with good deep dives in a few features.

If you are an Azure developer? You're in luck. There are at least 2 hours dedicated to Azure, without even talking about all the details about VSTS or Big Data.

If you want to put into practice what you’ve seen, hit Day 3 with some Microsoft Virtual Academy.

What should I expect?

As Azure is a continuously evolving platform, I would expect a lot of (small and big) announcements. Service Fabric was launched in Public Preview last year so I would hope to see an update on that.

With Microsoft Graph launching at last year's Connect and Microsoft Teams being launched just recently, I'm hoping to see some integration between those two products.

As for Visual Studio and .NET Core, if you haven't been following the GitHub repository, something something about the new project system. Nothing hidden here. But I'll let some high-quality demos blow your mind away. I'm also expecting some news about either a Visual Studio update or VS15, which has been in preview for quite a while now (Preview 5 being last October).

For UWP? I’m really hoping to see the future of that platform. Windows Phone is a great platform and the first step for Microsoft in the mobile space. UWP is the unified concept of One Application everywhere. I would love to see their vision there.

What about you?

What are YOU expecting? What do you hope in seeing?

Are your servers really less expensive than the cloud?

I’ve had clients in the past that were less than convinced by the cloud.

Too expensive. Less control and too hard to use.

Control has already been addressed by Troy Hunt before and I feel that it translates very well to this article.

As for the difficulty of using the cloud, it's seriously just a few clicks away in Visual Studio or a GitHub synchronization.

That leaves us with pricing which can be tricky. Please note that the prices were taken at the time of writing and will change in the future.

Too expensive

On-Premise option

When calculating the Total Cost of Ownership (TCO) of a server, you have to include all the direct, indirect and invisible costs that go into running this server.

First, there’s the server itself. If I go VERY basic with one machine, I can get one for $1300 or $42/months over 36 months with financing. We’re talking bare metal server here. No space or RAM for virtualization. I could probably get something cheap at Best Buy but at that point, it’s consumer-level quality no warranty on business usage. If you are running a web server and a database, you want those to be on two distinct machine.

Then, unless you have the technical knowledge and time to setup a network, configure the machine, secure them you will have to pay someone to pay for this. We’re talking either $100/hours of work or hiring someone who can do the job. Let’s assume you have a consultant come over for 2 days’ worth to help you set that up (16 hours). Let’s add 5 hours per years to deal with updates, upgrades and other.

Once that server is configured and ready to run, I hope you have AC to keep it cool. Otherwise, your 36 months investment may have components break sooner rather than later. If you don’t have it “on-site” but rather rent a space in a colocation area, keep that amount in mind for the next part.

Now let’s do a run down. $1300 per server, 16 hours at $100 per hours (with 5 hours per years to keep the machine up and running), add the electrical cost of running the server, the AC or a colocation rent.

Over 36 months, we’re at $158 per months + electrical / rent. I haven’t included cost of backups or any other redundancies.

Cloud Option

Let’s try to get you something similar on the cloud.

The first thing you will need is a database. On Azure, a standard S1 database will run you about $0.0403/hour, or about $30 per month. Most applications these days require a database, so that goes without saying.

Now you need to host your application somewhere. If you want to handle everything from the network to Windows Updates, a Standard A1 virtual machine will run you $0.090/hour, or $67 per month.

But let's be real here. If you are developing an API or a web application, let's drop having to manage everything and look at AppServices instead. You can get a Basic B1 instance for $0.075/hour, or $56 per month.

What AppServices brings you, however, is no Windows Updates. Nothing to maintain. Just make sure your app is running and alive. That's your only responsibility.

With Azure, running with Virtual Machines will cost you a bit less than $100 per month, and AppServices around $86 per month.

Let's say I add 100 GB of storage and 100 GB of bandwidth usage per month, and here's what I get.

Estimate for AppServices

Fine prints

If you analyze in detail what I've thrown at you in terms of pricing, you might see that an on-premise server is way more powerful than a cloud VM. That is true. But in my experience, nobody is using 100% of their resources anyway. If you do not agree with me, let's meet in the comments!

Here's what's not included in this pricing. You can be hosted anywhere in the world. You could start your hosting in the United States and then find out that you have clients in Europe? You can either duplicate your environment or move it to this new region without problems. In fact, you could have one environment per region without huge effort on your end. Depending on the plan you picked, you can load balance or auto-scale what was deployed on the cloud. That is impossible to do on-premise without waiting for new machines. Since you can have more than one instance per region and multiple deployments over many regions, your availability becomes way higher than anything you could ever achieve alone. 99.95% isn't impossible. We're talking less than 4.5 hours of downtime per year, or less than 45 seconds every day. With a physical server, one busted hard drive will take you down for at least a day.

Take-away notes

Whether you agree with me or not, one thing to take away is to always include all the costs.

Electricity, renting space, server replacement, maintenance, salary, consulting costs, etc. All must be included to fairly compare all options.

Summary

Some things should stay local to your business. From your desktop computers to your router, those need to stay local. But with tools like Office 365, which takes your email and files to the cloud, and Azure, which takes your hosting to the cloud, fewer and fewer elements require a proper local server farm anymore.

Your business or your client’s business can literally run off of somebody else’s hardware.

What do you think? Do you agree with me? Do you disagree? Let me know in the comments.

Atom: Adding Reveal in TreeView with the ReSharper Locate File Shortcut

There’s one feature I just love about ReSharper is the Locate File in Solution Explorer in Visual Studio.

For me, Visual Studio is too heavy for JavaScript development. So I use Atom and I seriously missed this feature.

Until I added it myself. With a 2-line keymap declaration.

'atom-text-editor':
'alt-shift-l': 'tree-view:reveal-active-file'

To add this to your shortcuts, just go in File > Keymap... and paste the above in the opened file.

What other shortcuts are you guys using?

The Lift and Shift Cloud Migration Strategy

When migrating assets to the cloud, there are many ways to do it. Nothing is a silver bullet. Each scenario needs to be adapted to our particular situations.

Among the many ways to migrate current assets to a cloud provider, one of the most common is called Lift and Shift.

Lift and shift

TLDR; You want the benefit of the cloud, but can’t afford the downtime of building for the cloud.

The lift and shift cloud migration strategy is to take your local assets (mostly on VMs or pure VMs) and just move them to the cloud without major changes to your code.

Why lift and shift?

Compared to the alternatives of building for the cloud (called Cloud Native) or refactoring huge swaths of your application to better use the cloud, it's easy to see why. It just saves a lot of developer hours.

When to lift and shift?

You want to do a lift and shift when your application architecture is just too complex to be Cloud Native, or when you don't have the time to convert it. Maybe you have third-party software that needs to be installed on a VM, or the application assumes it controls the whole machine and requires elevation. In those scenarios, where re-architecting big parts of the application is just going to be too expensive (or too time-consuming), a lift and shift approach is a good migration strategy.

Caveats

Normally, when going to the cloud, you expect to save lots of money. At least, that’s what everybody promises.

That’s one of the issues with a lift and shift. This isn’t the ideal path to save a huge amount of money quickly. Since you will be billed in terms of resources used/reserved, and if your Virtual Machines are sitting there doing nothing, it would be a good idea to start consolidating a few of them. But besides the cost, the move should be the most simple of all migration strategies.

You’ll still need to manage your Virtual Machine and have them follow your IT practices and patching schedule. Microsoft will not manage those for you. You are 100% in control of your compute.

Gains

It is, however, one way to reach scalability that wasn't previously available. You are literally removing the risk of handling your own hardware and putting it on your cloud provider. Let's be honest here: they have way more money in the infrastructure game to ensure that whatever you want to run will run. They have layers upon layers of redundancy, from power to physical security. Their data centers have certifications that would cost you millions to get. With Azure, you can get this. Now.

So stop focusing on hardware and redundancy. Start focusing on your business and your company’s goal. Unless you are competing in the same sphere as Microsoft and Amazon, your business isn’t hosting. So let’s focus on what matters. Focus on your customers and how to bring them value.

With some changes to the initial architecture, you will also be able to scale out (multiple VMs) your application servers based on a VM metric. Or, with no changes to the architecture, you can increase (or lower) the power of the VM with just a few clicks from the Portal or a script. It's the perfect opportunity to save money by downgrading a barely used application server or shutting down unused machines during off-hours.

This can definitely bring you an edge where a competitor would have to build a new server to expand their infrastructure. You? You get those on demand, for as long or as short as you need them.

The capability to scale and the reliability of the cloud are the low-hanging fruit and will be available with pretty much every other cloud migration strategy and every cloud provider.

When to re-architecture

Massive databases

We used to put everything in a database. I’ve seen databases in gigabytes or even terabytes territory. That’s huge.

Unless those databases are Data Warehouses, you want to keep your database as slim as possible to save money and gain performance.

If you are storing images and files in an SQL database, it might be time to use Azure Blob Storage. If you have a Data Warehouse, don’t store it in SQL Azure but rather store it in Azure SQL Data Warehouse.

Big Compute

If you are processing an ungodly amount of images, videos or mathematical models on a VM, it's time to start thinking about High-Performance Computing (HPC).

Of course, lift and shifting your VMs to the cloud is a nice temporary solution, but most likely you aren't using those VMs 100% of the time they are up. And when they are up, they may take longer to run the task than you might like.

It’s time to check Azure Batch, Cognitive Services (if doing face recognition) or even Media Services if you are doing video encoding.

Those services allow you to scale to an ungodly level to match your amount of work. Rather than keeping your VMs dedicated to those workloads, taking the time to refactor the work so that it can better leverage Azure services will allow you to improve your processing time and reduce the amount of maintenance on those machines.

Small Web Applications

Do you have small web applications with one small database hooked on an IIS server on your network? Those are excellent candidates to be moved to an Azure WebApp.

If your application has low traffic and low CPU/memory usage and is sitting with many other similar apps on a VM, it is the perfect scenario for refactoring to Azure App Services.

Those applications can be put on a Basic App Service Plan and share computing resources. If one of those apps suddenly becomes more resource-intensive, it takes 10 minutes to give it its own computing resources.

Even the databases themselves are a perfect case for moving to an Azure SQL Elastic Pool, where databases share their compute.

With the proper tweaking, it’s possible to gain all the benefits of simple website hosting without having to handle the VMs and their maintenance.

What’s next

There’s many ways to evolve once your old legacy servers are in a VM on Azure.

Going with containers is one of the many solutions. Of course, if you already have containers internally, your lift and shift to Azure should be about as painless as can be. If you are not, it will make your life easier by identifying the dependencies your old applications were using. This would allow you to leverage the cloud reliability and scalability without changing too much.

Without containers, consider Azure Site Recovery to automate replication of VMs on the cloud.

For certain applications, going Cloud Native is definitely going to save you more money as you adapt your application to only use the resources it needs. By using Blob Storage instead of the file system, NoSQL storage where it makes sense, and auto-scaling, you will be able to save money and run a leaner business.

Do you agree? Are there scenarios that I missed? Please comment below!

Which is best? Azure AppService or Azure Cloud Service?

After my previous post, we managed to find out what flavor of App Service to use (summarized at the top of that post by a famous tldr).

But another question I often receive is: should I use a WebApp (read: AppService) or a Cloud Service? What's the difference? Just like anything slightly complicated, the answer is "it depends". I know! It's a classic obscure response. The reason is that there is no silver bullet, and if there is no silver bullet, it means we have to look at your current scenario and try to compare it with the use cases that fit each solution.

So let's try to dive in a bit on use cases that warrant a WebApp and on those that would suit a Cloud Service better.

What is an AppService?

An AppService allows you to host a web application (API or UI) very easily within seconds.

If you are looking to iterate quickly, this is your option. Deployments usually take seconds from your machine. AppServices also allow you to quickly scale up (up to 10 instances) and out, either by responding to metrics (CPU/Memory/queue length) or manually through the portal. You can share resources by grouping your web applications within App Service Plans. Applications are hosted on the *.azurewebsites.net domain. Each additional instance that is created to answer demand contains a copy of your content and the configuration of your application. No need to replicate anything. And since we don't always deploy straight to production, it's possible to create a staging environment where you can release a preview build and test it out in the real production environment for vetting.

AppServices are PaaS, so there's no need to handle Windows Updates or any kind of patching. If you don't want to deploy with Visual Studio, you can deploy with GitHub, Dropbox, FTP or Web Deploy.

Many applications need to run some jobs on a schedule or respond to events from a queue. In those cases, WebJobs can easily be deployed and share the resources of the current web application.

It supports applications written in .NET, Node, PHP, Python and Java.

What is a Cloud Service?

A Cloud Service allows you to host anything on a compute resource running Windows that can scale to hundreds of machines.

If you are looking for an application that can take almost any type of traffic/load, this is your option. Deployments are longer; they will usually take a few minutes for new releases. Just like AppServices, it supports multiple staging environments and allows you to scale to tremendous amounts (up to 1000 instances) by responding to metrics (CPU/Memory/others) or just manually through the portal.

You can host 25 roles (Web/Worker) per service and independently scale each of them to 1000 instances. If you want a rolling deployment instead of "all at once", Cloud Service is where it shines. It can deploy to each instance gradually instead of all instances at once, unlike AppServices.

Need to Remote Desktop into the Cloud Service to see what is going on? Totally possible.

Cloud Service is a PaaS offering and will receive all OS patches automatically and, if configured properly, without any downtime.

However, it does not do GitHub deployments, and it will not automatically integrate your APIs with Logic Apps or BizTalk Services.

When do I want to use an AppService?

When you want to…

  • Enjoy fast deployments from many sources
  • Create web applications that are not too resource-hungry
  • Create APIs that need to integrate with Logic Apps
  • Create a minimum viable product
  • Use .NET, Node, Python or Java
  • Run maintenance jobs
  • Limit your expenses and save money

Common scenarios are…

  • Hosting a blog
  • Hosting an e-commerce site
  • Hosting a public company website
  • Creating any application for a client (Note: I always start with AppServices)
  • Running small jobs on a specific schedule or upon request (with or without webhooks)
    • Polling data feeds
    • Cleaning up SQL databases
    • Sending notifications

When do I want to use a Cloud Service?

When you want to…

  • Create powerful applications that can scale to massive numbers
  • Handle massive amounts of traffic
  • Independently scale background processes from the front-end
  • Run non-trivial background tasks
  • Remote desktop onto the machine
  • Have a high amount of control over the configuration
  • Ensure a proper separation between the instances/roles running your application (including for security reasons)

Common scenarios are…

  • Running specific tasks that require Windows APIs that are not available in AppServices (GDI+, COM, etc.)
  • Hosting a Fortune 500 e-commerce site
  • Running parallelizable CPU/memory-intensive tasks
  • Running long-running tasks
  • Running code with elevated privileges
  • Installing custom software on the VM (frameworks, compilers, others)
  • Moving an app that was running on AppServices but exceeded the capabilities of the service
  • Moving some legacy applications from a home data center to the cloud

Conclusion

In the end, it’s all about what you need.

Most of the time, clients don’t need more than AppServices to start deploying applications and see what Azure can bring to the table. When the scenario requires it, it’s our job to orient the technological choice toward the right solution.

While AppServices may suffice for 80% of a customer's needs, the other 20% will have you dig your hands into Cloud Services or even Azure Batch (more on that later!).

I hope that I managed to help you make a decision today. If you have questions, do not hesitate to comment!

Azure WebApp vs Azure API App

tldr; No differences under the hood. Only different icons, names and an API Definition that is populated. All App Services features are still available to you.

If it's your first time in the Azure ecosystem, you must be wondering what the difference is between a WebApp and an API App. Which one should I choose?

Differences between Azure WebApp and Azure API App

Most of the differences are pretty much in the naming, the icons, and the tooling.

Features of one are available in the other. There are absolutely no differences besides icons and names on the Azure Portal. Your initial choice doesn't impact you as much as it once did.

Let’s take a sample web app (created as an Azure WebApp) that is in my Azure Subscription. Here’s what is displayed once I get in the menu.

Mobile and Api menu

Both API and Mobile options are available. Where are the other differences? Mainly in the tooling. Some elements of the tooling are only going to work if you have an API definition (see above), but having an API Definition is not exclusive to an API App.

The API Definition is a link to your Swagger 2.0 API description. When publishing an API Service from Visual Studio, this field is going to be set automatically but can be pretty much anything you want.

Once you have an API defined, multiple other scenarios opens up like exposing your APIs through logic apps or BizTalk services, using API Management, or generating an API client from Visual Studio. But at its core? It’s still an App Service.

Generating a Client from a WebApp API Definition

When I right click Add > REST Api Client... in a Console Application in Visual Studio 2015, I’m shown the following screen.

Add REST Client

Clicking Select Azure Asset... will bring you this window.

Select Azure Asset

As you can see, there’s no API present. What happens if I publish a web app and add the API Definition after?

Set API Definition

I’ll close the Azure Asset Selector and refresh it.

Select Azure Asset

Summary

There once was a disconnect between Azure WebApps and API Apps. Today? The only difference is which icon/name you want that app to be flagged with. Otherwise? All features available for one are available to the other.

So go ahead. Create an app. Whether it's an API, a website, or a hybrid doesn't matter. You'll get work done and deployed just as easily.

Creating .NET Core Console WebJobs on an ASP.NET Core Azure WebApp

Code tested on .NET Core RTM 1.0.1 with Tooling Preview2 and Azure SDK 2.9.5

Introduction

I love WebJobs. You deploy a website, and you need a few tasks to run on the side to ensure that everything is running smoothly, or maybe to dequeue some messages and act on them.

How do you do it in ASP.NET 4.6.2? Easy. Visual Studio throws menus, wizards, blog posts and whatnot at you to help you start using it. The tooling is curated to make you fall into a pit of success.

With .NET Core tooling still being in preview, let's just say that there are no menus or wizards to help you out. I haven't seen anything online that helps you automate the "Publish" experience.

The basics - How do they even work?

WebJobs are easy. If you forget the Visual Studio tooling for a moment, WebJobs are run by Kudu.

Kudu is the engine behind most of Azure WebApps experience from deployments to WebJobs. So how does Kudu detect that you have WebJobs to run? Well, everything you need to know is on their wiki.

Basically, you will have a command file named run (with various supported extensions) located somewhere like this:
app_data/jobs/{Triggered|Continous}/{jobName}/run.{cmd|ps1|fsx|py|php|sh|js}

If that file is present? Bam. WebJob created with the jobName. It will automatically show in your Azure Portal.

Let’s take this run.cmd for example:

@echo off

echo "Hello Azure!"

Setting a schedule

The simplest way is to create a CRON schedule. This can be done by creating a settings.job file. This file is basically all the WebJobs options. One of them is called schedule.

If you want to trigger a job every hour, just copy/paste the following in your settings.job file.

{
  "schedule": "0 0 * * * *"
}

What we should have right now

  1. One ASP.NET Core application project ready to publish
    • A run.cmd file and a settings.job file under /app_data/jobs/Triggered/MyWebJob
  2. One .NET Core Console Application

To make sure it runs, we need to ensure that we publish the app_data folder. So add/merge the following section to your project.json:

{
  "publishOptions": {
    "include": [
      "app_data/jobs/**/*.*"
    ]
  },

}

If you deploy this right now, your web job will run every hour and echo Hello Azure!.

Let’s make it useful.

Publishing a Console Application as a WebJobs

First let’s change run.cmd.

@echo off

MyWebJob.exe

Or if your application doesn’t specify a runtimes section in your project.json:

@echo off
dotnet MyWebJob.dll
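The WebJob itself is nothing special. As a minimal sketch (MyWebJob being the console project referenced in the commands above, with the real work left to you), its entry point can be as simple as:

using System;

namespace MyWebJob
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // Do the actual work here: drain a queue, call an API, clean up data, etc.
            Console.WriteLine("Hello from the MyWebJob console application!");
        }
    }
}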

The last part is where the fun begins. Since we are publishing the WebApp, those executables do not exist yet. We need to ensure that they are created.

There's a scripts section in your project.json that runs right after publishing the ASP.NET Core application.

Let’s publish our WebJob directly into the proper app_data folder.

{
  "scripts": {
    "postpublish": [
      "dotnet publish ..\\MyWebJob\\ -o %publish:OutputPath%\\app_data\\jobs\\Triggered\\MyWebJob\\"
    ]
  }

}

Final Result

Congratulations! You now have a .NET Core WebJob published in your ASP.NET Core application on Azure that runs every hour.

All those console applications are of course runnable directly locally or through the Azure scheduled trigger.

If this post helped, let me know in the comments!

Ajax requests in a loop and other scoping problems with JavaScript

In the current code I’m working on, I had to iterate over an array and execute an Ajax request for each of these elements.

Then, I had to do an action on that element once the Ajax request resolved.

Let me show you what it looked like at first.

The code

function Sample() {
  var products = [{name: "first product"}, {name: "second product"}, {name: "third product"}];

  for(var i = 0; i < products.length; i++) {
    var product = products[i];
    $.get("https://api.github.com").then(function(){
      console.log(product.name);
    });
  }
}

NOTE: I’m targeting a GitHub API for demo purposes only. Nothing to do with the actual project.

So nothing is especially bad. Nothing is flagged in JSHint and I’m expecting to have the product names displayed sequentially.

Expected output

first product
second product
third product

Commit and deploy, right? Well, no.

Here’s what I got instead.

Actual output

third product
third product
third product

What the hell is going on here?

Explanation of the issue

First, everything has to do with scope. A var is scoped to the closest function definition or to the global scope, depending on where the code is being executed. In our example, product is scoped to the Sample function that wraps the for loop.

Then, there's the fact that a variable can be declared multiple times and only the last value is taken into account. So every time we loop, we redefine product to be the current products[i].

By the time an HTTP request comes back, product has already been (re-)defined 3 times, and the callback only sees the last value.

Here’s a quick timeline:

  1. Start loop
  2. Declare product and initialize with products[0]
  3. Start http request 1.
  4. Declare product and initialize with products[1]
  5. Start http request 2.
  6. Declare product and initialize with products[2]
  7. Start http request 3.
  8. Resolve HTTP Request 1
  9. Resolve HTTP Request 2
  10. Resolve HTTP Request 3

HTTP requests are slow, asynchronous operations, and their callbacks will only resolve after the local code has finished executing. The side effect is that our product has been redefined by the time the first request comes back.

Ouch. We need to fix that.

Fixing the issue the old way

If you are coding for the browser in 2016, you want to use closures. Basically, you pass the current value into a function that is executed immediately when defined; that function returns the appropriate callback to execute, capturing the value in its own scope. That solves the scoping issue.

function Sample() {
  var products = [{name: "first product"}, {name: "second product"}, {name: "third product"}];

  for(var i = 0; i < products.length; i++) {
    var product = products[i];
    $.get("https://api.github.com").then(function(product){
      return function(){
        console.log(product.name);
      }
    }(product));
  }
}

Fixing the issue the new way

If you are using a transpiler like BabelJS, you might want to use ES6 with the let keyword instead.

Its scoping is different and way more sane than var's.

You can see on BabelJS and TypeScript that the actual problem is resolved in a similar way: the transpiled output recreates the closure for you.

function Sample() {
  var products = [{name: "first product"}, {name: "second product"}, {name: "third product"}];

  for(var i = 0; i < products.length; i++) {
    let product = products[i];
    $.get("https://api.github.com").then(function(){
      console.log(product.name);
    });
  }
}

Time for me to use a transpiler?

I don’t know if, for me, it’s the straw that will break the camel’s back. I’m really starting to consider using a transpiler that will make our code more readable and less buggy.

This is definitely going on my TODO list.

What about you guys? Have you encountered bugs that would not have happened with a transpiler? Leave a comment!

Extracting your Toggl data to Azure Blob Storage

As a self-employed developer, I use multiple tools to make the boring jobs easy. One of those boring jobs is invoicing.

I normally invoice monthly and I use 2 tools for this. Toggl for time tracking and Excel to generate my invoice.

Yeah. I use Excel. It’s easy and it works. The downside is that I have to do a lot of copy/paste. Extract data from Toggl, import it into Excel, replace the invoice numbers, generate the PDF, export to Dropbox for safekeeping, etc.

So, I’ve decided to try to automate the most of those task for the lowest possible cost. My first step will be to extract the raw data for which I bill my clients.

If you are interested, you can find the solution on GitHub. The project name is TogglImporter.

Here’s the Program.cs

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.PlatformAbstractions;

namespace TogglImporter
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Task.WaitAll(Run(args));
        }

        public static async Task Run(string[] args)
        {
            Console.WriteLine("Starting Toggl Import...");
            var configuration = new ConfigurationBuilder()
                .SetBasePath(PlatformServices.Default.Application.ApplicationBasePath)
                .AddCommandLine(args)
                .AddJsonFile("appsettings.json")
                .AddJsonFile("appsettings.production.json", true)
                .Build();

            var queries = new Queries(configuration["togglApiKey"]);
            var storage = new CloudStorage(configuration["storageAccount"], "toggl-rawdata");
            Console.WriteLine("Initializing storage...");
            await storage.InitializeAsync();

            Console.WriteLine("Saving workspaces to storage...");
            var workspaces = await queries.GetWorkspacesAsync();
            var novawebWorkspace = workspaces.First(x => x.Name == "Novaweb");
            await storage.SaveWorkspace(novawebWorkspace);
            await Task.Delay(250);

            Console.WriteLine("Saving clients to storage...");
            var clients = await queries.GetWorkspaceClientsAsync(novawebWorkspace.Id);
            await storage.SaveClients(clients);
            await Task.Delay(250);

            Console.WriteLine("Saving projects to storage...");
            var projects = await queries.GetWorkspaceProjectsAsync(novawebWorkspace.Id);
            await storage.SaveProjects(projects);
            await Task.Delay(250);

            Console.WriteLine("Saving time entries to storage...");
            const int monthsToRetrieve = 2;
            for (int monthsAgo = 0; monthsAgo > -monthsToRetrieve; monthsAgo--)
            {
                var timeEntries = await queries.GetAllTimeEntriesForXMonthsAgoAsync(monthsAgo);
                await storage.SaveTimeEntriesAsync(DateTime.UtcNow.AddMonths(monthsAgo), timeEntries);
                await Task.Delay(500);
            }

            Console.WriteLine("Toggl import completed.");
        }
    }
}

What’s happening?

First, I have an object called Queries that takes a Toggl API key and queries their API. Simple HttpClient requests that return objects that are already deserialized.

Then I have an object called CloudStorage that stores those objects in a specific area in the cloud. It enforces folder hierarchies and naming.

Finally, I delay after each request to ensure I'm not overloading their API. The last thing you want is for them to shut it all down.
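To give you an idea of the shape of the Queries object, here's a minimal sketch of how the workspace call could be written with HttpClient. The endpoint and basic-auth scheme come from Toggl's public API v8 documentation; the model and method below are simplified assumptions, so check the TogglImporter repository for the real implementation.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

namespace TogglImporter
{
    public class Workspace
    {
        [JsonProperty("id")]
        public long Id { get; set; }

        [JsonProperty("name")]
        public string Name { get; set; }
    }

    public class Queries
    {
        private readonly HttpClient _client;

        public Queries(string apiKey)
        {
            _client = new HttpClient { BaseAddress = new Uri("https://www.toggl.com/api/v8/") };

            // Toggl uses HTTP basic auth with the API token as the user name
            // and the literal string "api_token" as the password.
            var credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes($"{apiKey}:api_token"));
            _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);
        }

        public async Task<List<Workspace>> GetWorkspacesAsync()
        {
            // GET /workspaces returns the workspaces the token has access to.
            var json = await _client.GetStringAsync("workspaces");
            return JsonConvert.DeserializeObject<List<Workspace>>(json);
        }
    }
}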

What’s being exported?

I export my workspace, clients and projects, as well as every time entry available for the last 2 months. I do this in multiple requests because this specific API limits the number of entries to 1000 and does not support paging.

If any of this can be helpful, let me know in the comments.

Localizing Auth0 Single Sign-on text for Lock v9.2.2

In the current mandate, we are using Auth0 Lock UI v9.2.2.

There happens to be a bug with the widget where the following text is hardcoded:

Single Sign-on Enabled

To fix this issue in our AngularJS Single Page Application, we had to introduce this in the index.html file.

<style ng-if="myLanguage == '<LANG HERE>'">
.a0-sso-notice::before{
font-size: 10px;
content: '<YOUR TEXT HERE>';
}

.a0-sso-notice{
/* hack to hide the text */
font-size: 0px !important;
}

</style>

This piece of code fixes the issue when a specific language is specified. ng-if will completely remove the style tag when the language doesn't match, and the CSS monkey-patches the text.

If you have more than 2 languages, it would pay to consider injecting the text directly within the style tag. Since Angular doesn’t allow you to parse it, somebody else already documented how to do it.

Publishing an App Service linked WebJob through Azure Resource Manager and Visual Studio Team Services

Assumptions

I assume that you already have a workflow with Visual Studio Team Services that deploy your Web Application as an App Service.

I assume that you are also deploying with an Azure Resource Group project that will automatically provision your application.

Creating the WebJobs

Well… that part is easy. You right click on the application that will be hosting your WebJob and you select Add > New Azure WebJob Project.

New Azure WebJobProject

Once you click on it, you should see the following wizard. At that point, you have a wide variety of choices on how and when to run your WebJob. Personally, I was looking for a way to run tasks on a schedule, every hour, forever.

You could have a continuous job that responds to triggers, one-time jobs, or recurring jobs with an end date. It's totally up to you at that point.

New Azure WebJobProject

Once you press OK, a new project will be created and your Web App will have a new file named webjobs-list.json added under Properties. Let's look at it.

{
  "$schema": "http://schemastore.org/schemas/json/webjobs-list.json",
  "WebJobs": [
    {
      "filePath": "../MyWebJob/MyWebJob.csproj"
    }
  ]
}

That is the link between your WebApp and your WebJobs. This is what will tell the build process on VSTS that it also needs to package the WebJob with this website so that Azure can recognize that it has WebJobs to execute.

Configuring the WebJob

Scheduling

By default, you will have a webjob-publish-settings.json created for you. It will contain everything you need to run the task at the set interval.

However, the Program.cs should look like this for task that should be run on a schedule.

public class Program
{
    static void Main()
    {
        var host = new JobHost();
        host.Call(typeof(Functions).GetMethod("MyTaskToRun"));
        //host.RunAndBlock();
    }
}

If the RunAndBlock command is present, the process will be kept alive and the task won’t be able to start at the next scheduled run. So remove it.

As for the code itself, flag it with NoAutomaticTrigger.

public class Functions
{
    [NoAutomaticTrigger]
    public static void MyTaskToRun()
    {
        Console.WriteLine("Test test test");
    }
}

Continuous

This is the perfect mode when you want to respond to events, like a new message arriving on a queue.

In this scenario, ensure that you do have RunAndBlock. This is a polling mechanism, and if your WebJob isn't running, it isn't processing events.

public class Program
{
    static void Main()
    {
        var host = new JobHost();
        host.RunAndBlock();
    }
}

public class Functions
{
    public static void ProcessMessage([QueueTrigger("MyQueue")] string message, TextWriter log)
    {
        //TODO: process message
    }
}

Enabling the Logs with your Storage Account through ARM templates

That’s the last part that needs to make sure your WebJob is running properly.

By default, they are running part in the same instance as your website. You need a way to configure your storage account within your ARM template. You really don’t want to hardcode your Storage account within your WebJob anyway.

So you just need to add this section under your WebApp definition to configure your connection strings properly.

{
  "resources": [
    {
      "apiVersion": "2015-08-01",
      "type": "config",
      "name": "connectionstrings",
      "dependsOn": [
        "[concat('Microsoft.Web/Sites/', variables('myWebApp'))]"
      ],
      "properties": {
        "AzureWebJobsDashboard": {
          "value": "[Concat('DefaultEndpointsProtocol=https;AccountName=',variables('MyStorageAccount'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('MyStorageAccount')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value)]",
          "type": "Custom"
        },
        "AzureWebJobsStorage": {
          "value": "[Concat('DefaultEndpointsProtocol=https;AccountName=',variables('MyStorageAccount'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('MyStorageAccount')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value)]",
          "type": "Custom"
        }
      }
    }
  ]
}

Remove the X from Internet Explorer and Chrome input type search

When you have an input with type="search", typing some content will display an X that allows you to clear the content of the box.

<input type="search" />

That X is not part of Bootstrap or any other CSS framework. It’s built-in the browser. To be more precise, Chrome and IE10+.

The only way to remove it is to apply something like this:

/* clears the 'X' from Internet Explorer */
input[type=search]::-ms-clear { display: none; width : 0; height: 0; }
input[type=search]::-ms-reveal { display: none; width : 0; height: 0; }

/* clears the 'X' from Chrome */
input[type="search"]::-webkit-search-decoration,
input[type="search"]::-webkit-search-cancel-button,
input[type="search"]::-webkit-search-results-button,
input[type="search"]::-webkit-search-results-decoration { display: none; }

The width/height on the Internet Explorer rules ensure that no space is reserved for the component. Otherwise, if you type text that is long enough, part of it may end up hidden under the invisible X.

That's it. Copy/paste that into your main CSS file and no search box will have that annoying X anymore.

Shooting yourself in the foot with C# Tasks - ContinueWith

Accidentally swallowing exceptions with C# async ContinueWith() is a real possibility.

I’ve accidentally done the same recently on a project.

The idea was that once a method finished running, I wanted to log the task that had just executed.

So my Controller Action looked something like this:

[HttpDelete]
public async Task<HttpResponseMessage> DeleteSomething(int id)
{
    await _repository.DeleteSomething(id)
        .ContinueWith(task => _log.Log(task));

    return Request.CreateResponse(HttpStatusCode.OK);
}

The delete would go and run the query against the database and delete some records in Azure Table Storage.

The Log method would just read the object resulting from the task completing and finish.

What would happen when DeleteSomething throws an exception? ContinueWith would get passed the faulted Task, and if you didn't rethrow at that point, it would go on and return an HTTP 200.

Wow. That’s bad. It’s like a highly sophisticated On Error Resume Next. Welcome to 1995.

Let’s fix this. I’m expecting to run this Task only when it succeed. So let’s make sure I use the right overload.

[HttpDelete]
public async Task<HttpResponseMessage> DeleteSomething(int id)
{
    await _repository.DeleteSomething(id)
        .ContinueWith(task => _log.Log(task), TaskContinuationOptions.OnlyOnRanToCompletion);

    return Request.CreateResponse(HttpStatusCode.OK);
}
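Another way around this, shown purely as a sketch (the logging call is adapted here, not the original _log.Log(task) signature), is to drop ContinueWith entirely and let exceptions propagate naturally:

[HttpDelete]
public async Task<HttpResponseMessage> DeleteSomething(int id)
{
    // If DeleteSomething throws, the exception bubbles up to the framework
    // and the client gets an error instead of a misleading HTTP 200.
    await _repository.DeleteSomething(id);
    _log.Log($"Deleted entity {id}");

    return Request.CreateResponse(HttpStatusCode.OK);
}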

If I keep doing stupid stuff like this, I might turn this into a series of what NOT to do.

If you encountered something similar, please let me know in the comments!

Backup and restore your Atom installed packages

So you are using Atom and you start installing plugins. Everything works nice and you have your environment with just the right packages.

Suddenly, your hard drive crashes or maybe your whole computer burns down. Or worse, you HAVE to use a client's computer to do your work.

So you have to reconfigure your environment. How will you recover all your packages and preferences?

First, as soon as you have your packages in a good state, run the first command to backup your installed packages.

Backing up your Atom Packages

Run this command:

apm list --installed --bare > atomPackages.txt

Will output something like this:

angular-jonpapa-snippets@0.7.0
angularjs@0.3.4
atom-beautify@0.29.10

Restoring your Atom Packages

To restore those packages, run the following command:

apm install --packages-file .\atomPackages.txt

Will give you an output like this:

Installing angular-jonpapa-snippets@0.7.0 to C:\Users\XXX\.atom\packages done
Installing angularjs@0.3.4 to C:\Users\XXX\.atom\packages done
Installing atom-beautify@0.29.10 to C:\Users\XXX\.atom\packages done

What’s apm?

apm stands for Atom Package Manager. See it as something similar to npm for Node, but for the Atom editor instead. You can do pretty much anything you want to Atom with apm.

What’s left?

The only things we are not backing up at this point are your snippets, custom styles, themes and keymaps.

If you are interested, let me know and I’ll show you how to back those up too.

Protecting your ASP.NET Identity passwords with bcrypt and scrypt

Most people nowadays use some sort of authentication mechanism when coding a new website. Some are connected directly to Active Directory, others use social logins like Google, Facebook or Twitter.

Then there are all those enterprise/edge-case customers that don't have an SSO (or can't have one) and still require users to create an account and pick a password.

In those scenarios, you don't want to end up on Troy Hunt's infamous list of Have I been pwned?. If you do, you want to buy the maximum amount of time so that users can change their password everywhere. We still do too much password reuse. It's bad, but it's not going away anytime soon.

So how do you delay an attacker? First, do not store your passwords in clear text. Then, hash and salt them. But which hashing algorithm should you use?

Most of .NET (pre-Core) suggested MD5/SHA1 as the default hashing mechanism, which is highly unsafe. In .NET Core, the default implementation is PBKDF2, which is a hundred times better. However, unless you require FIPS certification, it is not exactly the safest choice either.

Slower algorithms

PBKDF2 is part of a family of algorithms that allow you to configure a work factor at the moment of encoding the password. PBKDF2 lets you set the number of iterations that the hash must run before the hash is returned.

But given enough CPU/memory, this can be cracked faster each year.

There come bcrypt and scrypt.

BCrypt, like PBKDF2, allows you to set a work factor that will make the CPU work harder to generate a single hash. This makes brute-force attacks slower to run. However, with GPU hashing, those limitations are less and less of a restriction.

SCrypt, on the other hand, also allows you to set the memory usage, making the generation of a lot of passwords a very memory- and CPU-intensive process. This makes GPU hashing way harder for that specific algorithm.

Which one do I choose?

If you need FIPS certification? PBKDF2. Otherwise, scrypt is the way to go.

PBKDF2 should be configured to at least 10,000 iterations. Scrypt should be configured so that a server is responsive enough for users to log in. That means less than a second to login. Many parameters are offered and will need to be tweaked to your current hardware.
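If you stay on the default ASP.NET Core Identity hasher, the PBKDF2 iteration count is configurable through PasswordHasherOptions. A minimal sketch (verify the option name against the Identity version you are running):

public void ConfigureServices(IServiceCollection services)
{
    // Raise the PBKDF2 work factor; tune it so login stays well under a second on your hardware.
    services.Configure<PasswordHasherOptions>(options =>
    {
        options.IterationCount = 10000;
    });
}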

Please note that I am not a mathematician and I can't explain to you why one is better than the other. I'm just relaying what is suggested in the OWASP Password Storage Cheat Sheet.

How do I install those algorithm within ASP.NET Identity?

I have provided two different sample implementations, for bcrypt and scrypt, that will replace the default ASP.NET Core Identity IPasswordHasher.

I have also included the ASP.NET Core-compatible package to install for each.

BCrypt

Install-Package BCrypt.Net-Core

This package was created by Stephen Donaghy as a direct port of bcrypt. Source on GitHub.

Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    // ...
    services.AddTransient<IPasswordHasher<ApplicationUser>, BCryptPasswordHasher>();
}

public class BCryptPasswordHasher : IPasswordHasher<ApplicationUser>
{
    public string HashPassword(ApplicationUser user, string password)
    {
        return BCrypt.Net.BCrypt.HashPassword(SaltPassword(user, password), 10);
    }

    public PasswordVerificationResult VerifyHashedPassword(ApplicationUser user, string hashedPassword, string providedPassword)
    {
        if (BCrypt.Net.BCrypt.Verify(SaltPassword(user, providedPassword), hashedPassword))
            return PasswordVerificationResult.Success;

        return PasswordVerificationResult.Failed;
    }

    private string SaltPassword(ApplicationUser user, string password)
    {
        //TODO: salt password
    }
}

SCrypt

Install-Package Scrypt.NETCore

Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    // ...
    services.AddTransient<IPasswordHasher<ApplicationUser>, SCryptPasswordHasher>();
}

public class SCryptPasswordHasher : IPasswordHasher<ApplicationUser>
{
    private readonly ScryptEncoder _encoder;

    public SCryptPasswordHasher()
    {
        // 65536 = 2^16. Careful: writing "2 ^ 16" in C# is a XOR (which equals 18), not an exponent.
        _encoder = new ScryptEncoder(65536, 8, 1);
    }

    public string HashPassword(ApplicationUser user, string password)
    {
        return _encoder.Encode(SaltPassword(user, password));
    }

    public PasswordVerificationResult VerifyHashedPassword(ApplicationUser user, string hashedPassword, string providedPassword)
    {
        if (_encoder.Compare(SaltPassword(user, providedPassword), hashedPassword))
            return PasswordVerificationResult.Success;

        return PasswordVerificationResult.Failed;
    }

    private string SaltPassword(ApplicationUser user, string password)
    {
        //TODO: salt password
    }
}
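Both hashers above leave SaltPassword as a TODO. Purely as a hypothetical illustration (this is not the original implementation, and bcrypt/scrypt already generate and embed their own random salt), one simple option is to bind the hash to a per-user value:

private string SaltPassword(ApplicationUser user, string password)
{
    // Hypothetical example: prefix the password with a user-specific value.
    // The library's own random salt still does the heavy lifting.
    return $"{user.Id}:{password}";
}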

More reading

Read more on :

Increasing your website security on IIS with HTTP headers

UPDATE

And within 5 minutes of this post being published, Niall Merrigan just threw a wrench in my wheels.

All of the following can easily be applied by simply installing NWebSec written by André N. Klingsheim. Checkout the Getting Started page to install it right now.

However, if you are not running the ASP.NET pipeline (old or new), those recommendations still apply.

If you guys are aware of any library that can replace applying those manually, please let me know and I’ll update this post.

HTTP Strict Transport Security (HSTS)

What is it?

HSTS is a policy integrated within your browser that ensures that no protocol downgrade happens.

That means no going from HTTPS to HTTP and, in the case where the certificate is not valid, not allowing the page to load at all.

So let's say I type http://www.securewebsite.com in my address bar: the browser will automatically replace http:// with https://.

Let's say now that the certificate was replaced by a man-in-the-middle attack: the browser will simply show an error page without allowing you to skip it.

How do I implement it?

This consists of sending the Strict-Transport-Security header with a max-age value in seconds.

The following enforces the policy for one year, forces all subdomains to HTTPS, and makes you eligible for the preload list:

Strict-Transport-Security: max-age=31536000; includeSubdomains; preload

NOTE: Be careful with the preload list. Once you are on it, you are there for a long time; entries do not expire. If the preload flag is present, anyone can submit your domain for inclusion. It will not be added automatically, but once it is… you’re in, and it may take months to be taken off. Read here for more details about removal.

With IIS and its web.config, we can redirect HTTP requests to HTTPS and automatically add the right header to HTTPS responses.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="HTTP to HTTPS redirect" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="off" ignoreCase="true" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
      <outboundRules>
        <rule name="Add Strict-Transport-Security when HTTPS" enabled="true">
          <match serverVariable="RESPONSE_Strict_Transport_Security" pattern=".*" />
          <conditions>
            <add input="{HTTPS}" pattern="on" ignoreCase="true" />
          </conditions>
          <action type="Rewrite" value="max-age=31536000; includeSubdomains" />
        </rule>
      </outboundRules>
    </rewrite>
  </system.webServer>
</configuration>
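
If IIS isn’t in front of your application (Kestrel directly, Nginx, etc.) and you don’t want to pull in NWebSec, here is a minimal sketch of hand-rolled ASP.NET Core middleware that adds the header to HTTPS responses; it is my own example, not part of any library:

public void Configure(IApplicationBuilder app)
{
    // Add HSTS only on HTTPS responses; browsers ignore the header over plain HTTP anyway.
    app.Use(async (context, next) =>
    {
        if (context.Request.IsHttps)
        {
            context.Response.Headers["Strict-Transport-Security"] =
                "max-age=31536000; includeSubdomains";
        }

        await next();
    });

    // ... rest of the pipeline (static files, MVC, etc.)
}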

Limits

I could talk about the limits but Troy Hunt does an infinitely better job of explaining it than I do.

Also, please be aware that current versions of all modern browsers support this. However, IE10 and earlier will not protect you from these types of attacks.

IE11 and Edge do implement this security feature.

X-Frame-Options

What if someone tries to display your website within an iframe? Most of the time, this is neither a desired nor a tested scenario. At worst, it’s just another attack vector.

Let’s block it.

What is it?

X-Frame-Options lets you control whether your content may be displayed within an iframe. Unless you actually rely on framing, you can either disable it outright or restrict it to the same origin as your own site.

It protects you from attacks called clickjacking.

How do I implement it?

X-Frame-Options: [deny|sameorigin]

If you are using IIS, you can simply include this in its configuration.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="X-Frame-Options" value="DENY" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>

Please note that DENY will disallow any framing of your content, whether it comes from your own site, a subdomain, or anywhere else. If you want to allow same-origin iframes, replace DENY with SAMEORIGIN.

X-XSS-Protection

Certain browsers have a security mechanism that detects when an XSS attack is trying to take place.

When that happens, we want the page to be blocked outright rather than having the browser try to sanitize the content.

What is it?

This is a security feature that was first built into IE8 and was later brought to the WebKit browsers (Chrome and Safari). Each has its own criteria for what constitutes an XSS attack, but they all use this header to activate, deactivate, or configure the feature.

How do I implement it?

X-XSS-Protection: 1; mode=block

If you are using IIS, you can simply include this in its configuration.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="X-XSS-Protection" value="1; mode=block" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>

Content-Security-Policy

Let’s say I’m using a library in my project. This sprint, we update it to the latest version, test it and everything seems to work. We push it to production and bam. Suddenly, our users are being compromised. What you didn’t know is that the library was compromised a month ago. It loads an external script and runs it as if it came from your own website.

What is it?

This header prevents most cross-site scripting attacks by controlling where scripts, CSS, plugins and other resources can actually be loaded from.

How do I implement it?

This one requires careful tweaking. There are tons of options available to define it.

The most basic setting you can apply is this:

Content-Security-Policy: script-src 'self';

This restricts all JavaScript files to your own domain. If you are using Google Analytics, you would need to add its domains as well, like so:

Content-Security-Policy: script-src 'self' www.google-analytics.com ajax.googleapis.com;

A good default to start with?

Content-Security-Policy: default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';

From that point, test your site with the console open. Check which domains are being blocked, and whitelist them. You will see messages like this in the console:

Refused to load the script ‘http://…’ because it violates the following Content Security Policy directive: “…”.

As always, here’s the IIS version to implement it with the .config file.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Content-Security-Policy" value="default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
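
Same caveat as with HSTS: if IIS isn’t in the picture and NWebSec isn’t an option, the last three headers can be added from a small piece of ASP.NET Core middleware. This is only a sketch, and the CSP value is simply the “good default” from above; adjust it to your own sources:

public void Configure(IApplicationBuilder app)
{
    app.Use(async (context, next) =>
    {
        var headers = context.Response.Headers;

        // Disallow framing entirely; use SAMEORIGIN instead if you frame your own pages.
        headers["X-Frame-Options"] = "DENY";

        // Ask the browser to block the page when its XSS filter triggers.
        headers["X-XSS-Protection"] = "1; mode=block";

        // Lock every resource type down to the current origin by default.
        headers["Content-Security-Policy"] =
            "default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';";

        await next();
    });

    // ... rest of the pipeline
}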

Cross Platform PDF generation with ASP.NET Core and PhantomJS

This post will be short, but it’s worth it. One of the easiest ways I found to statically generate reports from .NET Core is simply not to use ASP.NET Core for rendering the PDF.

As things currently stand, most PDF libraries aren’t up to date yet and may take a while before they are ready to go. So how do you generate a PDF without involving other NuGet packages?

PhantomJS

What to render with ASP.NET Core?

Let’s take invoices as an example. I could create an internal-only MVC website with a single unsecured controller that renders invoices as HTML at specific URLs like /Invoice/INV000001.
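
A bare-bones version of that controller could look like the sketch below; InvoiceService and the Index view are placeholders for whatever you use to load and lay out the data:

[Route("Invoice")]
public class InvoiceController : Controller
{
    // InvoiceService is a placeholder for your own data access.
    private readonly InvoiceService _invoices;

    public InvoiceController(InvoiceService invoices)
    {
        _invoices = invoices;
    }

    // Renders a single invoice as plain HTML, e.g. /Invoice/INV000001.
    [HttpGet("{invoiceId}")]
    public IActionResult Index(string invoiceId)
    {
        var invoice = _invoices.GetById(invoiceId);
        if (invoice == null)
            return NotFound();

        return View(invoice); // Views/Invoice/Index.cshtml lays out the invoice.
    }
}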

Once you can render one invoice per page, you have 99% of the work done. What remains is to generate the PDF from that HTML.

How does it work with PhantomJS?

By using scripts like rasterize.js to interface with phantomjs, you can easily create PDF files in no time.

phantomjs rasterize.js http://localhost:5000/Invoice/INV000001 INV000001.pdf

And that’s it. You now have a PDF. The only thing left is to generate the list of URLs and associated filenames for the invoices you want to generate and run that list against that script.

It could even be part of a generation flow where a message is put on a queue to generate those invoices asynchronously.

{
  "invoiceId": "INV000001",
  "filename": "INV000001.pdf"
}

From there, we could have a swarm of processes that run the phantomjs script and write each invoice to the proper destination, such as blob storage.
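
As a rough illustration of one of those worker processes, here is a sketch that takes the message above and shells out to phantomjs. The QueueMessage type, the localhost URL and the output filename are assumptions for the example, not part of any library:

using System.Diagnostics;

public class QueueMessage
{
    public string InvoiceId { get; set; }
    public string Filename { get; set; }
}

public static class InvoiceRenderer
{
    // Shells out to phantomjs with rasterize.js to turn one invoice page into a PDF.
    public static void Render(QueueMessage message)
    {
        var url = $"http://localhost:5000/Invoice/{message.InvoiceId}";

        var startInfo = new ProcessStartInfo
        {
            FileName = "phantomjs",
            Arguments = $"rasterize.js {url} {message.Filename}",
            UseShellExecute = false
        };

        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
        }

        // From here, upload message.Filename to blob storage or wherever it needs to go.
    }
}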

Cross-Platform concerns

The best part about this process is that PhantomJS is available on Linux, OS X and Windows. With ASP.NET Core also available on all of these platforms, you have a truly cross-platform solution that will meet most of your needs.

Even better, this scenario works very well on Azure. By opting for an asynchronous flow, we allow the operation to scale better and slice the work into more maintainable pieces.

If you have an opinion on the matter, please leave a comment!