With the blazing-fast internet speeds at work (and at home), it might be hard to imagine how sites perform for people on lower bandwidth. When developing for the Web, it is good to keep in mind that people from various regions across the globe might access your site. Internet speeds are not that fast everywhere in the world, so it is essential to test how your website performs for people on a lower bandwidth.

There are a lot of plugins and external tools that can simulate a lower bandwidth scenario and make testing on slower bandwidths easier. If you are using Google Chrome, then you already have one such tool under your belt. The Network tab in Google Chrome Developer Tools has an option to set different bandwidth profiles. Set the profile you want and launch the site, and the site is forced to load at that bandwidth.

Google Chrome Developer Tools - Network Delay

There are many default bandwidth profiles in there, and it also allows you to add custom profiles if required. There is also an option to simulate offline mode, to test how the application behaves without an internet connection (if that applies to you). However, this is mostly restricted to manual testing; to automate it, you might have to look at external tools, as in the sketch below.
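If you already drive the browser from code, the throttling can be applied there too. Below is a rough sketch using the Selenium WebDriver Chrome bindings; the exact property names can vary between Selenium versions, and the throughput and latency values are only illustrative.

using System;
using OpenQA.Selenium.Chrome;

class LowBandwidthCheck
{
    static void Main()
    {
        using (var driver = new ChromeDriver())
        {
            // Illustrative values only - roughly a slow-connection profile
            driver.NetworkConditions = new ChromeNetworkConditions
            {
                Latency = TimeSpan.FromMilliseconds(300),
                DownloadThroughput = 50 * 1024, // ~50 KB/s
                UploadThroughput = 20 * 1024    // ~20 KB/s
            };

            driver.Navigate().GoToUrl("https://www.rahulpnath.com");
            // ... assert on load behaviour or capture timings here
        }
    }
}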

Hope this helps optimize your site for lower bandwidth!

At one of my clients, we faced a strange issue recently. The Azure Web application restarted automatically very often. The event log in the Kudu console showed the below error message.

2017-07-13 00:09:50,333 [P45516/D4/T171] INFO Umbraco.Core.UmbracoApplicationBase - Application shutdown. Details: HostingEnvironment

_shutDownMessage=Directory rename change notification for ’D:\home\site\wwwroot’.
Overwhelming Change Notification in wwwroot
HostingEnvironment initiated shutdown
Directory rename change notification for ’D:\home\site\wwwroot’.
Overwhelming Change Notification in wwwroot
Initialization Error
HostingEnvironment caused shutdown

As you can tell from the logs, the website is an Umbraco CMS hosted as an Azure Web application. We noticed that the restarts happened more often when content was getting updated through the backoffice. The error also states that the restart was caused by an Overwhelming Change Notification in wwwroot. This hints that changes are happening under the wwwroot folder, where the site is hosted.

Even though this post details why this specific Umbraco site was restarting, most of the content applies to any other ASP.NET MVC application as well.

fcnMode Configuration

A quick search got me to the fcnMode setting under the httpRuntime section. An ASP.NET application monitors certain files and folders under the wwwroot folder and restarts the application domain whenever it detects changes. This looks like the reason why the website is restarting.

The fcnMode enumeration can take one of the four values below. For an Umbraco application, this is set to Single by default.

  • Default: For each subdirectory, the application creates an object that monitors the subdirectory. This is the default behavior.
  • Disabled: File change notification is disabled.
  • NotSet: File change notification is not set, so the application creates an object that monitors each subdirectory. This is the default behavior.
  • Single: The application creates one object to monitor the main directory and uses this object to monitor each subdirectory.
fcnMode set to Single for Umbraco application
<system.web>
    ...
    <httpRuntime
        requestValidationMode="2.0"
        enableVersionHeader="false"
        targetFramework="4.5"
        maxRequestLength="51200"
        fcnMode="Single" />
    ...
</system.web>

FCNMode creates a monitor object with a buffer size of 4KB for each folder. When FCNMode is set to Single, a single monitor object is created with a buffer size of 64KB. When there are file changes, the buffer is filled with file change information. If the buffer gets overwhelmed with too many file change notifications an “Overwhelming File Change Notifications” error will occur and the app domain will recycle. The likelihood of the buffer getting overwhelmed is higher in an environment where you are using separate file server because the folder paths are much larger.

- ASP.NET File Change Notifications and DNN

You can read more about the fcnMode setting and how it affects ASP.NET applications here.

What’s causing file changes?

The default reaction when you come across such a setting or configuration value might be to turn it off, and fcnMode does allow that with Disabled. But first, it is better to understand what is causing the file changes under the wwwroot folder and see if we can address that. The FCN Viewer helps visualize how many files and folders are being watched in an ASP.NET application.

In the Umbraco website, we use the third-party library ImageProcessor to process images dynamically. ImageProcessor caches images, and the cache location is configurable. By default, it caches files under the App_Data/cache folder, which also happens to be one of the folders that the ASP.NET application monitors for changes. So whenever lots of files change in the cache folder, they overwhelm the single monitor object watching the folders. This overflows its buffer and triggers an application restart due to Overwhelming file change notifications. However, ImageProcessor does allow moving the cache folder outside of the wwwroot folder. The cached files are then no longer monitored, and the application still works fine. Since the library does not create the cache folder automatically, we need to make sure that the folder specified in the config file exists.

Having moved the cache folder outside of wwwroot, I no longer need to touch the fcnMode setting and can leave it as intended. If you are also facing application restarts due to overwhelming change notifications in wwwroot, find out what is likely causing the file changes and try to fix that instead of just setting fcnMode to Disabled.

Hope that helps fix your application restarting problem!

Writing is hard work. A clear sentence is no accident. Very few sentences come out right the first time, or even the third time. Remember this in moments of despair. If you find that writing is hard, it’s because it is hard.

– William Zinsser, On Writing Well

Editing is the hardest part of writing and the one that I skip the most. I try hard not to skip editing, but often I am too lazy to do the hard work. I try to cover this up with tools that make it faster. It is hard for one tool to get it all right, so it’s best to have a range of tools under your belt to support your writing.

The Hemingway Editor highlights hard-to-read sentences, adverbs, complicated phrases, etc. The writing app uses different colors to highlight the various issues, as shown below. The editor also shows a summary of the text, including the reading time, total words, sentences, etc. The app also shows a Readability Grade using the Automated Readability Index.

Hemingway Editor

The Hemingway Editor is available for free on the web with fewer features. The web application does not let you save your work, so you are always at risk of losing it if you are authoring on the site.

The Windows/Mac application supports a larger set of features, but for a price. The desktop application works without an internet connection and allows publishing to WordPress/Medium. It also supports exporting to different formats (Word, PDF, HTML, Markdown, etc.).

I use the Hemingway Editor (on the web) occasionally and find it useful at times. It’s good to double check for any issues before publishing the post. Hope this helps you as well in your writing.

How to sign a PDF using Azure Key Vault? - This is one of the questions that I often get regarding Azure Key Vault. In this post, we will explore how to sign a PDF using a certificate in Azure Key Vault. Signing a PDF has various aspects to it, which are covered in depth in the white paper - Digital Signatures for PDF Documents. We will be using the iText library to sign the PDF. iText is available as a NuGet package. The below image shows the elements that compose a digital signature on the left and the actual contents on the right.

Digitally Signed PDF Contents

Signing with a Local Certificate

When the certificate (along with the private key) is available locally, signing the PDF is straightforward. You can load the certificate as an X509Certificate from the local certificate store using the thumbprint. Make sure that the certificate is installed with the Exportable option as shown below.

Exportable certificate

The PrivateKeySignature is an implementation of IExternalSignature that can be used to sign the PDF when the private key is available. The below code signs the Hello World.pdf using the certificate from the local store and saves that as Local Key.pdf.

private static void SignPdfWithLocalCertificate()
{
    // Load the certificate (with its private key) from the local certificate store
    var certificate = GetCertificateLocal();
    var privateKey = DotNetUtilities.GetKeyPair(certificate.PrivateKey).Private;
    var externalSignature = new PrivateKeySignature(privateKey, "SHA-256");
    SignPdf(certificate, externalSignature, "Local Key.pdf");
}

private static void SignPdf(X509Certificate2 certificate, IExternalSignature externalSignature, string signedPdfName)
{
    // Build the certificate chain in the BouncyCastle format that iText expects
    var bCert = DotNetUtilities.FromX509Certificate(certificate);
    var chain = new Org.BouncyCastle.X509.X509Certificate[] { bCert };

    using (var reader = new PdfReader("Hello World.pdf"))
    {
        using (var stream = new FileStream(signedPdfName, FileMode.OpenOrCreate))
        {
            // Create a detached CMS signature using the provided external signature
            var signer = new PdfSigner(reader, stream, false);
            signer.SignDetached(externalSignature, chain, null, null, null, 0, PdfSigner.CryptoStandard.CMS);
        }
    }
}
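The GetCertificateLocal helper is not shown in the post. A minimal sketch, assuming the certificate is looked up by thumbprint from the current user's My store (the thumbprint value is a placeholder you would replace), could look like the below.

// Minimal sketch of GetCertificateLocal - assumes a thumbprint lookup in the
// current user's My store (System.Security.Cryptography.X509Certificates).
private static X509Certificate2 GetCertificateLocal()
{
    var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly);
    try
    {
        var matches = store.Certificates.Find(
            X509FindType.FindByThumbprint,
            "<certificate thumbprint>", // placeholder
            validOnly: false);

        if (matches.Count == 0)
            throw new InvalidOperationException("Certificate not found in the store.");

        return matches[0];
    }
    finally
    {
        store.Close();
    }
}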

Certificates in Azure Key Vault

You can manage certificates in Azure Key Vault as first-class citizens. Azure Key Vault supports creating new certificates or uploading existing ones into the vault. Key Vault provides an option to specify whether the private portion of the certificate is exportable or not. Let us see how we can use the certificate from the vault in both these scenarios.

Exportable Certificate

To create a self-signed certificate in the vault, use the below PowerShell script. In this case, the private key is exportable.

$certificatepolicy = New-AzureKeyVaultCertificatePolicy -SubjectName "CN=www.rahulpnath.com" -IssuerName Self -ValidityInMonths 12
Add-AzureKeyVaultCertificate -VaultName "VaultFromCode" -Name "TestCertificate" -CertificatePolicy $certificatepolicy

Key Vault Certificate

Creating a certificate, in turn, creates three objects in the vault - Certificate, Key, and Secret. The Certificate represents the certificate just created, the Key represents the private part of the certificate, and the Secret has the certificate in PFX format (just as if you had uploaded a PFX as a Secret). Since the certificate created above is exportable, the Secret contains the private portion of the key as well. To recreate the certificate locally in memory, we use the below code.

public static async Task<X509Certificate2> GetCertificateKeyVault(string secretIdentifier)
{
    var client = GetKeyVaultClient();
    var secret = await client.GetSecretAsync(secretIdentifier);

    var certSecret = new X509Certificate2(
        Convert.FromBase64String(secret.Value),
        string.Empty,
        X509KeyStorageFlags.Exportable);

    return certSecret;
}

The certificate is encoded as Base64String in the Secret. We create an in-memory representation of the certificate and mark it as Exportable. This certificate can be used the same way as the local certificate. Since the private key is part of it, the PrivateKeySignature can still be used to sign.
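The GetKeyVaultClient helper used above (and in the snippets that follow) is not shown in the post. Since the sample authenticates with a ClientId/Secret, a minimal sketch using ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory) could look like the below; the application id and secret are placeholders.

// Minimal sketch of GetKeyVaultClient, assuming ClientId/Secret authentication.
// The KeyVaultClient invokes the callback to acquire a token for each request.
private static KeyVaultClient GetKeyVaultClient()
{
    return new KeyVaultClient(async (authority, resource, scope) =>
    {
        var context = new AuthenticationContext(authority);
        var credential = new ClientCredential("<application id>", "<application secret>");
        var token = await context.AcquireTokenAsync(resource, credential);
        return token.AccessToken;
    });
}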

Non Exportable Certificate

To create a non-exportable certificate, use the KeyNotExportable flag when creating it, as below.

$certificatepolicy = New-AzureKeyVaultCertificatePolicy -SubjectName "CN=www.rahulpnath.com" -IssuerName Self -ValidityInMonths 12 -KeyNotExportable
Add-AzureKeyVaultCertificate -VaultName "VaultFromCode" -Name "TestCertificateNE" -CertificatePolicy $certificatepolicy

Executing this also creates three objects in the vault, as above. But since we marked the certificate as non-exportable, the Secret will not have the private part of the key. We can still create an in-memory representation of the certificate, but we cannot use PrivateKeySignature, as the certificate does not have the private key. We need to use the Key created along with the certificate to sign the PDF bytes. For this, we need a custom implementation of IExternalSignature - KeyVaultSignature.

public class KeyVaultSignature : IExternalSignature
{
    private KeyVaultClient keyClient;
    private string keyIdentifier;

    public KeyVaultSignature(KeyVaultClient client, string keyIdentifier)
    {
        keyClient = client;
        this.keyIdentifier = keyIdentifier;
    }

    public string GetEncryptionAlgorithm()
    {
        return "RSA";
    }

    public string GetHashAlgorithm()
    {
        return "SHA-256";
    }

    public byte[] Sign(byte[] message)
    {
        // Hash the PDF bytes locally; only the digest is sent to Key Vault
        var hasher = new SHA256CryptoServiceProvider();
        var digest = hasher.ComputeHash(message);

        // Sign the digest with the (non-exportable) key inside the vault
        return keyClient
            .SignAsync(
                keyIdentifier,
                Microsoft.Azure.KeyVault.WebKey.JsonWebKeySignatureAlgorithm.RS256,
                digest)
            .Result.Result;
    }
}

KeyVaultSignature uses the Key Vault client library to connect to the vault and uses the specified key to sign the passed-in PDF bytes. Since the key is the private part of the certificate, the signature can be verified with the public key. The below code shows how to use KeyVaultSignature in the signing process.

private static async Task SignPdfWithNonExportableCertificateInKeyVault()
{
    var client = GetKeyVaultClient();
    var exportableSecretIdentifier = "https://vaultfromcode.vault.azure.net:443/secrets/TestCertificateNE";
    var certificate = await GetCertificateKeyVault(exportableSecretIdentifier);

    var keyIdentifier = "https://vaultfromcode.vault.azure.net:443/keys/TestCertificateNE/65d27605fdf74eb2a3f807827cd756e1";
    var externalSignature = new KeyVaultSignature(client, keyIdentifier);

    SignPdf(certificate, externalSignature, "Non Exportable Key Vault.pdf");
}

Once you install and trust the public portion of the certificates, you can see the green tick, indicating that the PDF is verified and signed.

Signed PDF

The sample code for all three scenarios is available here. I have used ClientId/Secret to authenticate with Key Vault for the sample code. If you are using this in a production environment, I would recommend using a certificate to authenticate with Key Vault. iText supports creating PDF stamps and more features in the signing process, which is well documented. Hope this helps you to secure your PDF files.

Merge conflicts can be a pain when working in large teams where code bases change fast. Every time you sync with the main code base, you need to make sure that you integrate the updates with your work in progress. It’s great when the changes get automatically merged by the source control system. But when things need to be merged manually, a good tool can be of real help.

Beyond Compare

Beyond Compare allows you to quickly and easily compare your files and folders. By using simple, powerful commands, you can focus on the differences you’re interested in and ignore those you’re not. You can then merge the changes, synchronize your files, and generate reports for your records.

The Text Compare viewer is the one that I use most frequently to compare or merge different versions of code. Beyond Compare makes it easy to spot the code differences and gives the capability to copy the code from one version to another. One can also copy portions of changes and merge just that instead of all the changes. The 3-Way Text Merge feature is also useful, especially when you are using Git.

Beyond Compare has a Standard and a Pro Edition with a lot of features. I have the Pro Edition license for Beyond Compare, thanks to Readify for the Software allowance that you get every year.

Beyond Compare is one of the tools that I use almost daily. Check it out if you have not already.

When it comes to mobile phones, I used to be a gadget freak and buy the latest phone as soon as it arrived in the market. But not after I got the Nexus 5. It’s been well over three years now, and not even once did I want to switch over to something else. The phone never slowed down over these years, and the battery lasted through the day. I broke the Nexus screen a couple of months back and was forced to switch over to a new phone. I didn’t have to do much market research or phone comparison this time, as I just knew which one to pick up.

The Pixel, Phone By Google!

Pixel

Key Features

The Pixel is the first phone in the Google Pixel hardware line and is a successor to the Nexus range of devices. The Pixel runs Android Nougat, with some features exclusive to it. The Pixel is very much comparable to the Nexus, with improved hardware and the latest software. Below are some of the key features that I like about the phone.

  • Fingerprint sensor The fingerprint sensor on the back allows you to unlock the phone and specific apps and also authorize purchases. 1Password integrates with the sensor, so I no longer need to type in my long master password every time I use it.

  • Camera Pixel takes brilliant photos in bright light, low light, and any light. The camera is awesome and at times is a good replacement for your DSLR. With Pixel, Google also provides unlimited original quality storage for all photos and videos.

  • Google Assistant Pixel is the first phone with the Google Assistant built in. It helps you manage tasks, find content on the phone and the internet, perform actions, etc., just like Siri or Cortana. I have not used the Assistant much and am yet to find a good use case for it.

Apps

Some of the apps that I use regularly are

Case

I am using the Spigen Neo Hybrid case for Pixel. It adds extra weight to the phone, but I am used to it from my Nexus as well. The Neo Hybrid provides military grade protection and protects the phone from most falls.

Overall I have found the Pixel a good phone and recommend it to anyone looking for a new phone now. The Samsung Galaxy S8 is also in the same range (slightly costlier) and worth considering if you are not too particular about having a vanilla Android experience. Which phone do you use?

Grammarly is a writing platform that helps you polish your writing. Grammarly scans your text for mistakes based on pre-written rules and suggests modifications. Each suggestion comes with a detailed explanation, which helps improve your writing over time.

Grammarly makes sure everything you type is clear, effective, and mistake-free.

With the Chrome plugin, Grammarly is there everywhere you write on the web. It hides away neatly in a corner without getting in the way of your writing.

Grammarly Chrome plugin

You can author text on the website, have Grammarly watch your back as you type anywhere in Chrome with the plugin, integrate it with Microsoft Office, or use the Windows app. Check out the apps section for more details. You can try Grammarly for free and check for critical grammar and spelling mistakes. With the premium version, you get a lot more features and grammar rules. Currently, I am on the premium version and recommend it.

Try Grammarly for free and see if it helps you as well.

The aesthetics of code are as important as the code you write. Alignment is an important part of the overall aesthetics of code. The importance of code aesthetics struck me on a recent project. Below are some of the code samples that I came across in the project. Traversing this code base was painful, as the formatting was all over the place.

Bad Formatting
public class Account
{
    public long   Id                    { get; set; }
    public string ClientId              { get; set; }
    public long   ContactId             { get; set; }
    public string UserName              { get; set; }
    public string Name                  { get; set; }
    public string Company               { get; set; }
    public string Address               { get; set; }
    public string BillingAddress        { get; set; }
}
Bad Formatting
public ConnectToServer(string username,
                       string password,
                       string server,
                       string port)
{
    ...
}

The code has too many alignment points that attract the eye, which makes it hard to read in the first place. When in isolation this might still be fine to read, but with such a style across the code base, it soon becomes a pain for your eyes and your mind. When refactoring code, it becomes even harder, as you need to put in extra effort to make sure that this fancy alignment is maintained. Let’s take a look at how even changing a property name (Company to CompanyName) or a function name (ConnectToServer to Connect) affects the current formatting.

Renamed to CompanyName
public class Account
{
    ...
    public string Name              { get; set; }
    public string CompanyName           { get; set; }
    public string Address           { get; set; }
    ...
}
Renamed to Connect
public Connect(string username,
                       string password,
                       string server,
                       string port)
{
    ...
}

As you can see above, the formatting is now all over the place, and you need to bring it back into shape manually. Again, when in isolation this might seem like a few presses of the spacebar. But when the property/function that you rename is used in multiple places, this soon becomes a problem. Such code formatting introduces maintenance overhead and soon falls out of place if something gets missed.

Better Ways To Format Code

Left aligning code is one of the key things that I always try to follow. Keeping the code aligned to the left makes it easier to read (assuming that you are programming in a language written from left to right). Since we read from left to right, having most of the code aligned to the left means that you have more code visible. Left aligning also means that you almost never need to scroll the code editor horizontally when reading through the code.

Let’s take a look at how the above code looks when left aligned.

Left Aligned
public class Account
{
    public long Id { get; set; }
    public string ClientId { get; set; }
    public long ContactId { get; set; }
    public string UserName { get; set; }
    public string Name { get; set; }
    public string Company { get; set; }
    public string Address { get; set; }
    public string BillingAddress { get; set; }
}
Left Aligned Multiple Lines
public ConnectToServer(
    string username,
    string password,
    string server,
    string port)
{
    ...
}
Left Aligned Single Line
public ConnectToServer(
    string username, string password, string server, string port)
{
    ...
}

As you can see above, left aligning makes the code much easier to read and also reduces the number of alignment points. It is also refactoring friendly, as there are no specific space patterns that need to be maintained. As for parameters in a single line vs. parameters in multiple lines (as above), I prefer the multi-line approach, as it keeps the code further aligned to the left and also reduces the chance of getting a horizontal scroll bar. You can use Column Guides to remind yourself to keep the code within the acceptable horizontal space.

Code formatting is an important aspect of coding. As a team, you need to agree on some standard practices and find ways to stick to them. You can use styling tools, code reviews, etc. to make sure it does not get missed. It takes a while for any new practice to set in, but soon it becomes second nature and easy to follow.

Fiddler is an HTTP debugging proxy server application that captures HTTP and HTTPS traffic. It is one of the tools in my essential toolkit. Fiddler allows debugging traffic from PC, Mac, Linux, and mobile systems. It helps inspect the raw requests and responses between the client and the server.

Fiddler

Some of the key features that I often use in Fiddler are

  • Inspect Request/Response Look into the request and response data to see if all the required headers/attributes are set, and the data is sent as expected

  • Compose Web Requests Manually compose requests to send to the server and test endpoints.

  • AutoRespond to Requests Intercept requests from the browser and send back a pre-defined response or create a delay in response to the actual client.

  • Statistics Fiddler statistics give an overview of the performance details of a web session, indicating where the time is spent in the whole request/response cycle.

  • Modify and Replay a Request Fiddler allows modifying the request by editing its contents and replay the message to the server.

  • Export and Import Fiddler makes it easy to share captured traces with different people. All captured traffic or selected requests can be exported and shared with others. The exported saz file can be opened in Fiddler to view all the session details.

These are some of the features that I use on a regular basis. Fiddler supports a lot more and is extensible to support custom requirements as well. I find it an indispensable tool when developing for the Web. Get Fiddler if you have not already.

The project that I am currently working on uses Team Foundation Version Control (TFVC) as its source control. After using Git for a long time, it felt hard to move back to TFVC. One of the biggest pains for me is losing the short commits that I make while working. Short commits help keep track of the work and also quickly revert unwanted changes. Branching is also much easier with Git and allows switching between work items without the hassle of shelving -> undoing -> pulling back the latest, as with TFVC.

Use Git Locally in a TFVC Repository

The best part of Git is that it works with any folder on your system and does not need any setup. Running ‘git init’ initializes a Git repository in the folder. Running it against my local TFVC source code folder, I initialized a Git repository locally. Now I can work locally using Git - make commits, revert, change branches, etc. Whenever I reach a logical end to the work, I create a shelveset and push the changes up to TFVC source control from Visual Studio.

If you want to interact with TFVC source control straight from the command line, you can try out git-tfs - a Git/TFS bridge. For me, since I am happy working locally with Git and pushing up the finished work as shelvesets from Visual Studio, I have not explored the git-tfs tool.

Hope this helps someone if you feel stuck with TFVC repositories!