The project that I am currently working on uses Team Foundation Version Control (TFVC) as its source control. After using Git for a long time, it felt hard to move back to TFVC. One of the biggest pains for me is losing the short commits that I make while working. Short commits help keep track of the work and also make it quick to revert unwanted changes. Branching is also much easier with Git and allows me to switch between pieces of work without the 'shelve -> undo -> pull back the latest' hassle of TFVC.

Use Git Locally in a TFVC Repository

The best part of git is that you can use it with any folder on your system, and it does not need any setup. Just running 'git init' initializes a git repository in the folder. Running init against my local TFVC source code folder, I initialized a git repository locally. Now I can work locally using git - make commits, revert, change branches, etc. Whenever I reach a logical end to the work, I create a shelveset and push the changes up to TFVC source control from Visual Studio.

If you want to interact with TFVC source control straight from the command line, you can try out git-tfs - a Git/TFS bridge. Since I am happy working locally with git and pushing up the finished work as shelvesets from Visual Studio, I have not explored the git-tfs tool.

Hope this helps someone if you feel stuck with TFVC repositories!

One of the traits of a good unit test is to have just one Assert statement.

Consider Assert failures as symptoms of a disease and Asserts as indication points or blood checks for the body of the software. The more symptoms you can find, the easier the disease will be to figure out and remove. If you have multiple asserts in one test - only the first failing one reveals itself as failed and you lose sight of other possible symptoms.

-Roy Osherove

When a test with multiple asserts fails, it is hard to tell the exact reason for the failure. To get more details on the actual failure, we either have to debug the test or look into the stack trace.

Tests With Multiple Assertions

Many times we end up needing to assert on more than one property or behavior. Let's look at a few such examples and see how we can refactor the tests. I have excluded the actual code that is being tested here, as it is easy to see what it looks like from the tests. (Drop a comment otherwise.)

Example 1: We have a Name class that represents the FirstName and LastName of a user. It exposes a Parse method to make it easy to create a Name object from a string. Below are some tests for the Parse method; they use multiple assertions to confirm that the first and last name properties get set as expected.

Name class
[Theory]
[InlineData("Rahul", "Rahul", "")]
[InlineData("Rahul Nath", "Rahul", "Nath")]
[InlineData("Rahul P Nath", "Rahul", "P Nath")]
public void FirstNameOnlyProvidedResultsInFirstNameSet(
   string name,
   string expFirstName,
   string expLastName)
{
    var actual = Name.Parse(name);

    Assert.Equal(expFirstName, actual.FirstName);
    Assert.Equal(expLastName, actual.LastName);
}

Example 2: The below test is for a controller class, confirming that the CustomerViewModel passed to the Post method saves the Customer to the repository. The assert statement includes multiple properties of the customer object, which is just a shorthand for writing multiple assert statements, one per property.

Controller Unit Test
[Theory, AutoWebData]
public void PostSavesToRepository(
    CustomerViewModel model,
    [Frozen]Mock<ICustomerRepository> customerRepository,
    CustomerController sut)
{
  var expected = model.ToCustomer();

  sut.Post(model);

  customerRepository.Verify(a =>
    a.Upsert(It.Is<Customer>(customer =>
        customer.Name == expected.Name &&
        customer.Age == expected.Age &&
        customer.Phone == expected.Phone)));
}

Example 3: The below test ensures that all properties are set when transforming from a DTO to a domain entity (or for any such object transformation at system boundaries). The test asserts on every property of the class.

Comparing different object types
[Theory]
[AutoMoqData]
public void AllowanceToDomainModelMapsAllProperties(
    Persistence.Allowance allowance,
    int random)
{
    allowance.EndDate = allowance.StartDate.AddDays(random);

    var actual = allowance.ToDomainModel();

    Assert.Equal(allowance.ClientId, actual.ClientId);
    Assert.Equal(allowance.Credit, actual.Credit);
    Assert.Equal(allowance.Data, actual.Data);
    Assert.Equal(allowance.StartDate, actual.Period.StartDate);
    Assert.Equal(allowance.EndDate, actual.Period.EndDate);
}

Semantic Comparison Library

SemanticComparison is a library that allows deep comparison of similar-looking objects. Originally part of the AutoFixture library, it is also available as a separate NuGet package.

SemanticComparison makes it easier to compare instances of various objects to each other. Instead of performing a normal equality comparison, SemanticComparison compares objects that look semantically similar - even if they are of different types.

Using SemanticComparison, we can compare two objects property by property for equality. It also allows including or excluding specific properties from the comparison.
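As a minimal sketch of the basic usage (the PersonDto and Person types here are hypothetical, made up for illustration):

using SemanticComparison.Fluent;

public class PersonDto { public string Name { get; set; } public int Age { get; set; } }
public class Person { public string Name { get; set; } public int Age { get; set; } }

public class LikenessExample
{
    public void Compare()
    {
        var source = new PersonDto { Name = "Rahul", Age = 30 };
        var destination = new Person { Name = "Rahul", Age = 30 };

        // Builds a Likeness that matches properties by name across the two types;
        // ShouldEqual throws a LikenessException if any matching property differs.
        source
            .AsSource()
            .OfLikeness<Person>()
            .ShouldEqual(destination);
    }
}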

Refactoring Tests

Example 1: The Name class is a perfect candidate for being a Value Object. In that case, the class overrides Equals (a minimal sketch of that is shown below), and it becomes easier for us to write the tests. Converting to a Value Object is one of the cases where we use tests as feedback to improve the code. But in cases where you do not have control over the class or do not want to make it a value object, we can use SemanticComparison to check for equality, as in the test that follows the sketch.
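A hedged sketch of value equality on Name (the post does not show the implementation; the FirstName and LastName property names are taken from the tests):

public class Name
{
    public string FirstName { get; private set; }
    public string LastName { get; private set; }

    public Name(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    // Parse and other members elided.

    public override bool Equals(object obj)
    {
        var other = obj as Name;
        return other != null &&
               FirstName == other.FirstName &&
               LastName == other.LastName;
    }

    public override int GetHashCode()
    {
        return (FirstName ?? string.Empty).GetHashCode() ^
               (LastName ?? string.Empty).GetHashCode();
    }
}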

Name Class
[Theory]
[InlineData("Rahul", "Rahul", "")]
[InlineData("Rahul Nath", "Rahul", "Nath")]
[InlineData("Rahul P Nath", "Rahul", "P Nath")]
public void FirstNameOnlyProvidedResultsInFirstNameSet(
   string name,
   string expFirstName,
   string expLastName)
{
    var expected = new Name(expFirstName, expLastName);

    var actual = Name.Parse(name);

    expected
        .AsSource()
        .OfLikeness<Name>()
        .ShouldEqual(actual);
}

Example 2: Using SemanticComparison, we can remove the need to assert on each of the properties. In the below case, since the Customer Id is set to a new Guid in the ToCustomer method, I exclude the Id property from the comparison using Without. When the expected object gets compared against the actual, all properties except Id will be compared for equality. Any number of properties can be excluded by chaining multiple Without calls.

Controller Unit Test
[Theory, AutoWebData]
public void PostSavesToRepository(
    CustomerViewModel model,
    [Frozen]Mock<ICustomerRepository> customerRepository,
    CustomerController sut)
{
  var customer = model.ToCustomer();
  var expected = customer
      .AsSource()
      .OfLikeness<Customer>()
      .Without(a => a.Id);

  sut.Post(model);

  // Likeness.Equals returns a bool, which is what the Moq matcher needs
  // (ShouldEqual would throw instead of returning a value).
  customerRepository.Verify(a =>
    a.Upsert(It.Is<Customer>(actual =>
        expected.Equals(actual))));
}

Example 3: Using SemanticComparison, we can remove the asserts on every property and also define custom comparisons. The StartDate and EndDate on the persistence entity are converted into a DateRange object (Period). Using the With method in combination with EqualsWhen, we can specify the custom comparison to perform for that member. The same test holds even when new properties are added and will force the mapping to be updated if any property mapping is missed. Here we also see how SemanticComparison can compare two different types.

Comparing different object types
[Theory]
[AutoMoqData]
public void AllowanceToDomainModelMapsAllProperties(
    Persistence.Allowance allowance,
    int random)
{
    allowance.EndDate = allowance.StartDate.AddDays(random);

    var actual = allowance.ToDomainModel();

    allowance
        .AsSource()
        .OfLikeness<Allowance>()
        .With(a => a.Period)
        .EqualsWhen((p, m) =>
            m.Period.StartDate == p.StartDate &&
            m.Period.EndDate == p.EndDate)
        .ShouldEqual(actual);
}

Using the SemanticComparison library, we reduce the dependencies on the actual implementation and extract the comparison into a more generic representation. Fewer dependencies on implementation details make the tests more robust and adaptable to change. Hope this helps you get started with SemanticComparison and improve your test assertions.


If you are looking for a case for your phone, check out the cases by Spigen. I have been using Spigen cases for over three years, and I totally recommend them.

Spigen cases are made with premium materials and are slim, sleek, and simple. Spigen provides various models that match different needs. The cases provide military-grade protection and protect the phone from most falls.

The first Spigen case I got was for my Nexus 5. The Neo Hybrid lasted for over three years. With the Spigen case on, the Nexus was well protected. It fell from my hands many times, and every time the case protected it well enough. A couple of months back the Spigen case broke, and I had been using the phone without a cover since then. Unfortunately, during one of my morning runs the phone fell from my hands while I was slipping it into the armband. Without the Spigen to protect it, the Nexus screen broke at the corners.

Spigen Neo Hybrid for Pixel

A month back I switched over to a Google Pixel as the Nexus was becoming unusable with the broken screen. I got the Spigen Neo Hybrid for the Pixel. The case provides dual-layer protection with a TPU body and PC bumper and comes with a fingerprint-resistant finish. The precise cutouts give easy access to all buttons and the fingerprint sensor.

If you are looking to get a case, check out whether Spigen has one for your model!

Azure Key Vault from Node.js

If you develop on Node.js, you can use the Azure SDK for Node, which makes it easy to consume and manage Microsoft Azure services. In this post, let's explore how to use the Node SDK to connect to Azure Key Vault and interact with the vault objects. If you are new to Key Vault, check out my other posts here to get started.

The azure-keyvault npm (node package manager) package allows accessing keys, secrets, and certificates on Azure Key Vault. It requires Node.js version 6.x.x or higher. You can get the latest Node.js version here.

Package Features

  • Manage keys: create, import, update, delete, backup, restore, list and get.

  • Key operations: sign, verify, encrypt, decrypt, wrap, unwrap.

  • Secret operations: set, get, update and list.

  • Certificate operations: create, get, update, import, list, and manage contacts and issuers.

It is easy to set up a new project and execute code using Node. The ease of setup is one of the things that I like about Node. To try out the Key Vault package, you can start fresh in a new folder and create a JavaScript file - main.js (you can name it anything you want).

The following packages are required to connect to the vault and authenticate. The azure-keyvault package, as we saw above, provides the capabilities to interact with the vault. adal-node is the Azure Active Directory Authentication Library (ADAL) for Node; it makes it easy to authenticate to AAD to access AAD-protected web resources. Applications using Key Vault need to authenticate using a token from an Azure AD application.

const KeyVault = require('azure-keyvault');
const { AuthenticationContext } = require('adal-node')

Authenticate Using ClientId and Secret

Create the Azure AD application and the Secret key as shown in this post. Grab the ClientId and Secret for authentication from the node application.

const clientId = "CLIENT ID";
const secret = "SECRET";

var secretAuthenticator = function (challenge, callback) {

    var context = new AuthenticationContext(challenge.authorization);
    return context.acquireTokenWithClientCredentials(
        challenge.resource,
        clientId,
        secret,
        function (err, tokenResponse) {
            if (err) throw err;

            var authorizationValue = tokenResponse.tokenType + ' ' + tokenResponse.accessToken;
            return callback(null, authorizationValue);
        });
};

To access the vault, we need to create an instance of the KeyVaultClient object, which takes in credentials as shown below. The KeyVaultClient exposes different methods to interact with keys, secrets, and certificates in the vault. For example, to retrieve a secret from the vault, the getSecret method is used, passing in the secret identifier.

const secretUrl = "https://rahulkeyvault.vault.azure.net/secrets/ApiKey/b56396d7a46f4f848481de2e149ef069";
var credentials = new KeyVault.KeyVaultCredentials(secretAuthenticator);
var client = new KeyVault.KeyVaultClient(credentials);

client.getSecret(secretUrl, function (err, result) {
    if (err) throw err;

    console.log(result);
});

Authenticate Using ClientId and Certificate

To authenticate using a ClientId and certificate, the AuthenticationContext exposes a function acquireTokenWithClientCertificate, which takes in the certificate (in PEM format) and the certificate thumbprint. If you already have a certificate, go ahead and use that. If not, create a new test certificate as shown below.

makecert -sv mykey.pvk -n "cn=AD Test Vault Application" ADTestVaultApplication.cer -b 03/03/2017 -e 06/05/2018 -r
pvk2pfx -pvk mykey.pvk -spc ADTestVaultApplication.cer -pfx ADTestVaultApplication.pfx -po test

Create a new AD application and set it to use certificate authentication. Assign the application permissions to access the key vault.

$certificateFilePath = "C:\certificates\ADTestVaultApplication.cer"
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
$certificate.Import($certificateFilePath)
$rawCertificateData = $certificate.GetRawCertData()
$credential = [System.Convert]::ToBase64String($rawCertificateData)
$startDate= [System.DateTime]::Now
$endDate = $startDate.AddYears(1)
$adApplication = New-AzureRmADApplication -DisplayName "CertAdApplication" -HomePage  "http://www.test.com" -IdentifierUris "http://www.test.com" -CertValue $credential  -StartDate $startDate -EndDate $endDate

$servicePrincipal = New-AzureRmADServicePrincipal -ApplicationId $adApplication.ApplicationId

Set-AzureRmKeyVaultAccessPolicy -VaultName 'RahulKeyVault' -ServicePrincipalName $servicePrincipal.ServicePrincipalNames[0] -PermissionsToSecrets all -PermissionsToKeys all

To convert the pvk file into the PEM format that adal-node requires for authenticating with the AD application, use the below command.

openssl rsa -inform pvk -in mykey.pvk -outform pem -out mykey.pem

Using the PEM-encoded certificate private key, we can authenticate with the vault as shown below.

const fs = require('fs');

function getPrivateKey(filename) {
    var privatePem = fs.readFileSync(filename, { encoding: 'utf8' });
    return privatePem;
}

var certificateAuthenticator = function (challenge, callback) {
    var context = new AuthenticationContext(challenge.authorization);

    return context.acquireTokenWithClientCertificate(
        challenge.resource,
        clientId,
        getPrivateKey("mykey.pem"),
        "CERTIFICATE THUMBPRINT",
        function (err, tokenResponse) {
            if (err) throw err;

            var authorizationValue = tokenResponse.tokenType + ' ' + tokenResponse.accessToken;
            return callback(null, authorizationValue);
        }
    )
};

Using the certificateAuthenticator is the same as using the secretAuthenticator: pass it to KeyVaultCredentials.

var credentials = new KeyVault.KeyVaultCredentials(certificateAuthenticator);
var client = new KeyVault.KeyVaultClient(credentials);

client.getSecret(secretUrl, function (err, result) {
    if (err) throw err;

    console.log(result);
});
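For npm install to pick up the two packages, a minimal package.json along these lines should work (the package name/version ranges here are my assumptions; use whichever versions match your setup):

{
  "name": "keyvault-sample",
  "version": "1.0.0",
  "dependencies": {
    "adal-node": "^0.1.28",
    "azure-keyvault": "^3.0.0"
  }
}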

To run the application, first run npm install to install the required packages and then execute the file using node main.js. It fetches the secret value from the Key Vault using the certificate or secret authenticator. Hope this helps you get started with Azure Key Vault from Node.js.

When I upload images to my blog, I try to keep the file size as small as possible. Reducing image size helps improve the site load time. To grab screenshots I use Snagit, and I use Paint.net for editing and resizing the images.

To further optimize and compress images, I use PNGGauntlet (https://pnggauntlet.com/). PNGGauntlet combines PNGOUT, OptiPNG, and DeflOpt and helps smash PNGs to the smallest possible size.

PNG Gauntlet

PNGGauntlet provides options to configure the PNG output. You can play around with the options for the best output. I use the default options, and they have been working fine for me.

Check out PNGGauntlet to optimize images for the web.

If you are wondering what Git Credential Manager (GCM) is, then you probably see the below screen very often when interacting with your git repositories.

Enter your Credentials, git

On Windows, you can use the Git Credential Manager for Windows, which integrates with git and provides the credentials whenever required. GCM removes the need to enter your credentials every time you use your git repositories.

Cmder is a portable console emulator for Windows. I prefer to use git from the command line and find the Cmder experience good. Check out the YouTube video for more details.

To set up GCM with Cmder, download the latest release of GCM in the zip format. Unzip the package under the vendor folder in Cmder and run the install.cmd from within the unzipped GCM package.

Vendor folder under cmder

Once you run the install script, the git config is updated to use the credential manager. Running git config --list will show credential.helper set to manager. If this is not set automatically, you can set it manually by running

Set GCM as git credential manager
git config --global credential.helper manager

For GUI prompts for entering credentials use

Enable Gui prompt for passwords
git config --global credential.modalprompt true

Hope that saves you some time if you were entering the credentials every time you push/pull from a git repository.

A unit test suite provides immediate feedback when you make a change, and a passing test suite gives confidence in the changes made. It is the confidence the team has in the test suite that matters, more than the code coverage number. Tests also provide feedback about the code: they suggest how easy or difficult the code just written is to use, since tests are its first consumers. Different kinds of Test Smells indicate a problem with the code that is getting tested or with the test code itself, and provide feedback to improve it.

Test Feedback

Let’s take a look at a couple of Test Smells and see what changes can be made to improve the code.

Multiple Asserts on Class Properties

Tests should ideally follow the Single Responsibility Principle (SRP): a test should verify one thing and try to limit that to one Assert statement. Often I come across tests that assert multiple things. At times this is just because we are testing all side effects of the method under test; such tests can be broken down into separate tests that each verify one thing. In certain other cases, the effects of the method under test are themselves spread across multiple properties. Let's see a simple example of one such case. Below is a DateRange class which takes in a start date and an end date and throws if the end date is less than the start date.

public class DateRange
{
    public readonly DateTime StartDate;

    public readonly DateTime EndDate;

    public DateRange(DateTime startDate, DateTime endDate)
    {
        if (endDate < startDate)
            throw new ArgumentException("End date cannot be less than start Date");

        StartDate = startDate;
        EndDate = endDate;
    }

    public static DateRange MonthsFromDate(DateTime date, int numOfMonths)
    {
        return new DateRange(date, date.AddMonths(numOfMonths));
    }

    public bool IsInRange(DateTime theDateTime)
    {
        return theDateTime >= StartDate && theDateTime <= EndDate;
    }
}

Let's take a look at one of the tests that checks for the successful creation of a DateRange object using the MonthsFromDate function. In the test below, you can see that there are two assert statements verifying that the DateRange object is created successfully. In this particular case, the assertions are limited to two, but there could often be more.

[Theory]
[InlineData("01-Jan-2017", 2, "01-Mar-2017")]
[InlineData("01-Jan-2017", 0, "01-Jan-2017")]
[InlineData("01-Jan-2017", 27, "01-Apr-2019")]
public void MonthsFromDateReturnsExpected(
    string startDateString,
    int monthsFromNow,
    string endDateString)
{
    var startDate = DateTime.Parse(startDateString);
    var endDate = DateTime.Parse(endDateString);

    var actual = DateRange.MonthsFromDate(startDate, monthsFromNow);

    Assert.Equal(startDate, actual.StartDate);
    Assert.Equal(endDate, actual.EndDate);
}

I can think of two ways to solve the above problem: one is to refactor the test code, and the other is to refactor the DateRange class itself. Both methods involve creating the expected DateRange object upfront and then comparing against it for equality. The tests can be refactored using the SemanticComparison library.

Refactor Test using SemanticComparison
[Theory]
[InlineData("01-Jan-2017", 2, "01-Mar-2017")]
[InlineData("01-Jan-2017", 0, "01-Jan-2017")]
[InlineData("01-Jan-2017", 27, "01-Apr-2019")]
public void MonthsFromDateReturnsExpectedUsingSemanticComparison(
   string startDateString,
   int monthsFromNow,
   string endDateString)
{
    var startDate = DateTime.Parse(startDateString);
    var endDate = DateTime.Parse(endDateString);
    var expected = new DateRange(startDate, endDate);

    var actual = DateRange.MonthsFromDate(startDate, monthsFromNow);

    expected
        .AsSource()
        .OfLikeness<DateRange>()
        .ShouldEqual(actual);
}

In this particular case, looking closely at the system under test (SUT), the DateRange class, we understand that it can be a Value Object. Any two instances of DateRange with the same start and end date can be considered equal: equality is based on the value contained and not on any other identity. In all the cases where you observe this behavior, it might not be possible to convert the class into a value object; in those cases, use the SemanticComparison approach shown above. But where you do have control over it, override Equals and GetHashCode to implement value equality. The test becomes much simpler and has less code.
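A minimal sketch of what that value equality could look like on DateRange (one possible implementation; the post does not show it):

public class DateRange
{
    public readonly DateTime StartDate;
    public readonly DateTime EndDate;

    // ... constructor, MonthsFromDate and IsInRange as shown earlier ...

    public override bool Equals(object obj)
    {
        var other = obj as DateRange;
        return other != null &&
               StartDate == other.StartDate &&
               EndDate == other.EndDate;
    }

    public override int GetHashCode()
    {
        return StartDate.GetHashCode() ^ EndDate.GetHashCode();
    }
}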

Refactor DateRange to ValueObject
[Theory]
[InlineData("01-Jan-2017", 2, "01-Mar-2017")]
[InlineData("01-Jan-2017", 0, "01-Jan-2017")]
[InlineData("01-Jan-2017", 27, "01-Apr-2019")]
public void MonthsFromDateReturnsExpectedUsingValueObject(
   string startDateString,
   int monthsFromNow,
   string endDateString)
{
    var startDate = DateTime.Parse(startDateString);
    var endDate = DateTime.Parse(endDateString);
    var expected = new DateRange(startDate, endDate);

    var actual = DateRange.MonthsFromDate(startDate, monthsFromNow);

    Assert.Equal(expected, actual);
}

Complicated Test Setup and Test Code Duplication

At times we run into cases where setting up the sut is complicated and requires a lot of code. A complicated setup often leads to test code duplication.

A complicated test setup warrants ‘cut-copy-paste’ to test different aspects of the sut.

From my experience, I have seen this happen most in the test setup phase: the setup is identical across a set of tests, with only the assertions being different. Let us look into some common reasons why the test setup can become complicated, leading to test code duplication as well.

Violating Single Responsibility Principle (SRP)

The test setup can get complicated when the sut violates the Single Responsibility Principle (SRP). When there are too many things that get affected by the sut, the setup and verification phases become complex. In these cases, extracting the responsibilities as injected dependencies helps reduce complexity. The tests can then use mocks to test the sut in isolation, as sketched below. The post Refactoring to Improve Testability: Extracting Dependencies looks into an end-to-end scenario of this case and how it can be improved.
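As a compressed, hypothetical sketch of the idea (the Order, IMailer, and processor types here are made up for illustration):

public class Order { public string CustomerEmail { get; set; } }

public interface IMailer { void Send(string to, string body); }

public class SmtpMailer : IMailer
{
    public void Send(string to, string body) { /* real SMTP call */ }
}

// Before: the processor news up its mailer internally, so every test
// needs real mail infrastructure just to exercise the processing logic.
public class OrderProcessor
{
    public void Process(Order order)
    {
        var mailer = new SmtpMailer(); // hidden dependency

        // ... process the order ...
        mailer.Send(order.CustomerEmail, "Order processed");
    }
}

// After: the responsibility is extracted behind IMailer and injected,
// so tests can pass in a mock and verify the interaction in isolation.
public class RefactoredOrderProcessor
{
    private readonly IMailer mailer;

    public RefactoredOrderProcessor(IMailer mailer)
    {
        this.mailer = mailer;
    }

    public void Process(Order order)
    {
        // ... process the order ...
        mailer.Send(order.CustomerEmail, "Order processed");
    }
}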

Violating SRP also leads to test code duplication, as multiple aspects need testing and the setup looks almost the same for each. Refactoring the sut, along with the test code, makes the tests more robust in these cases.

SUT Constraints

Test code duplication can also occur when there are constraints on a constructor and the tests need to construct the object. Take the example of the DateRange class we saw above. The DateRange constructor takes in two dates, startDate and endDate, but enforces the rule that endDate must not be less than startDate. In such cases, I often see tests that need a DateRange, directly or indirectly (as a property on another object), creating them explicitly.

Explicitly create objects with Constraints
[Theory]
[InlineData("1 Jan 2016", "1 Mar 2016", "20 Feb 2016")]
[InlineData("11 Apr 2016", "30 Mar 2017", "26 Dec 2016")]
public void DateInBetweenStartAndEndDateIsInRangeManualSetup(
    string startDateString,
    string endDateString,
    string dateInBetween)
{
    var startDate = DateTime.Parse(startDateString);
    var endDate = DateTime.Parse(endDateString);
    var date = DateTime.Parse(dateInBetween);
    var sut = new DateRange(startDate, endDate);

    var actual = sut.IsInRange(date);

    Assert.True(actual);
}

We cannot depend on the default behavior of AutoFixture to generate a DateRange object for us, as it does not know about this constraint and will always pass two random dates to the constructor. The below test is not repeatable and can fail whenever AutoFixture happens to pass an endDate less than the startDate.

Using AutoFixture on classes that have constraints can lead to tests that are not repeatable
[Theory]
[InlineAutoData]
public void DateInBetweenStartAndEndDateIsInRange(DateRange sut)
{
    var rand = new Random();
    var date = sut.StartDate.AddDays(rand.Next(0, (sut.EndDate - sut.StartDate).Days - 1));
    var actual = sut.IsInRange(date);

    Assert.True(actual);
}

To make the test repeatable, we must be able to generate a DateRange successfully every time we ask AutoFixture for one. For this, we add a DateRange customization and plug it into the Fixture creation pipeline. The customization makes sure that the DateRange constructor parameters always satisfy the constraint.

DateRange AutoFixture Customization
public class InlineCustomizedAutoDataAttribute : AutoDataAttribute
{
    public InlineCustomizedAutoDataAttribute()
        : base(new Fixture().Customize(new DateRangeCustomization()))
    {
    }
}

public class DateRangeCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        fixture.Customizations.Add(new DateRangeSpecimenBuilder());
    }
}

public class DateRangeSpecimenBuilder : ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        var requestAsType = request as Type;
        if (typeof(DateRange).Equals(requestAsType))
        {
            var startTime = context.Create<DateTime>();
            var range = context.Create<uint>();
            return new DateRange(startTime, startTime.AddDays(range));
        }

        return new NoSpecimen();
    }
}

The tests can now be updated to use the InlineCustomizedAutoDataAttribute instead of the default InlineAutoDataAttribute, as shown below. The tests are repeatable now, as we can be sure that AutoFixture will always generate a valid DateRange object.
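The updated test would look along these lines (a sketch; the body mirrors the earlier test, with the upper bound adjusted so the generated date always falls inside the range):

[Theory]
[InlineCustomizedAutoData]
public void DateInBetweenStartAndEndDateIsInRangeRepeatable(DateRange sut)
{
    var rand = new Random();
    // IsInRange is inclusive of both ends, so any offset from 0 to Days is valid.
    var date = sut.StartDate.AddDays(rand.Next(0, (sut.EndDate - sut.StartDate).Days + 1));

    var actual = sut.IsInRange(date);

    Assert.True(actual);
}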

Public vs. Private for Tests

It often happens that we get into discussions on whether a function should be private or public, and we think it is a bad idea to write production code in a way that merely suits the tests. To test private methods, you can employ reflection techniques or use the InternalsVisibleTo attribute. But this is a smell in itself.

Tests should go through the public API of the class. If it gets difficult to test through that API, it hints that the code is dealing with multiple responsibilities or has too many dependencies.

There are valid use cases for the private and internal access modifiers, but the majority of the time I see private and internal code, it merely smells of poor design. If you change the design, you could make types and members public, and feel good about it.

-Unit Testing Internals, Mark Seemann

Consider refactoring your code so that it is easier to test. Tests are the first consumers of the code, and they help shape the public API and the way it gets consumed. It is fine to have tests affect the way you write code. What is not fine is to have explicit test-only code paths within the production code. The problem with having such code is that the other path - the one that actually runs in production - never gets tested.
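A hypothetical illustration of that smell (made up for this post, not from any real codebase):

public class ReportMailer
{
    // Test-only switch baked into production code - the smell.
    public static bool IsRunningInTests = false;

    public void SendReport(string recipient, string report)
    {
        if (IsRunningInTests)
            return; // tests flip the flag and only ever exercise this early exit

        // The real delivery path below is exactly what the tests never cover.
        // ... SMTP call to send the report ...
    }
}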

Tests act as a feedback tool, and it is important that you listen to them. If you decide to bear the pain of writing tests while ignoring the feedback, just to meet some code coverage number, then you are doing it wrong. In most cases, you will end up with hard-to-maintain code and fragile tests. Listen to the feedback and incorporate it into the code you write.

For my blog, I often have to edit, resize, and modify images. The default Paint application on Windows lacks a lot of features, while Photoshop is too advanced for my needs. Paint.net is a freely available tool with many advanced features, and it is lightweight.

Paint.NET is free image and photo editing software for PCs that run Windows. It features an intuitive and innovative user interface with support for layers, unlimited undo, special effects, and a wide variety of useful and powerful tools.

Paint.Net

In addition to the basic features expected of an image editing tool, paint.net supports layers and special effects, with unlimited history/undo. It also includes a gradient tool, Magic Wand, Clone Stamp, and more for advanced editing. Every action performed on the image is recorded in the History window and can be undone.

Download the latest version of paint.net for free.

In the previous post, we saw how to connect to Azure Key Vault from Azure Functions. We used the Application Id and Secret to authenticate with the Azure AD application. Since the general recommendation is to use certificate-based authentication, in this post we will see how to use certificates to authenticate from within an Azure Function.

First, we need to create an Azure AD application and set it up to use certificate-based authentication. Create a new service principal for the AD application and associate that with the Azure Key Vault. Authorize the AD application with the permissions required. In this case, I am providing all access to keys and secrets.

$certificateFilePath = "C:\certificates\ADTestVaultApplication.cer"
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
$certificate.Import($certificateFilePath)
$rawCertificateData = $certificate.GetRawCertData()
$credential = [System.Convert]::ToBase64String($rawCertificateData)
$startDate= [System.DateTime]::Now
$endDate = $startDate.AddYears(1)
$adApplication = New-AzureRmADApplication -DisplayName "CertAdApplication" -HomePage  "http://www.test.com" -IdentifierUris "http://www.test.com" -CertValue $credential  -StartDate $startDate -EndDate $endDate

$servicePrincipal = New-AzureRmADServicePrincipal -ApplicationId $adApplication.ApplicationId

Set-AzureRmKeyVaultAccessPolicy -VaultName 'RahulKeyVault' -ServicePrincipalName $servicePrincipal.ServicePrincipalNames[0] -PermissionsToSecrets all -PermissionsToKeys all

Create an Azure Function App under your subscription as shown below. You can also use the same application created in the previous post (if you did create one).

Azure Function New App

In the Function Apps page, select the app just created. Add a new function as in the last post. Selecting the Function App shows the available set of actions. Under the Platform Features tab, we first upload the SSL certificate and then update the Application Settings to make the certificate available to the function.

Azure Function Platform Features

Upload the certificate by selecting it from your folder system.

Azure Function Upload Certificate

For the certificate to be available for use in the Azure Function, an entry should be present in Application Settings. Under Application Settings in the Platform Features tab, add an app setting with the key WEBSITE_LOAD_CERTIFICATES and the certificate thumbprint as the value. This makes the certificate available for consumption within the function. Multiple thumbprints can be specified comma-separated if required.

Azure Function Certificates App Settings

Using a certificate to authenticate with the Key Vault is the same as we have seen before.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

private const string applicationId = "AD Application ID";
private const string certificateThumbprint = "Certificate Thumbprint";

public async static Task Run(TimerInfo myTimer, TraceWriter log)
{
    // The callback loads the certificate from the store (made available via
    // WEBSITE_LOAD_CERTIFICATES) and uses it to acquire the AAD token.
    var keyClient = new KeyVaultClient(async (authority, resource, scope) =>
    {
        var authenticationContext = new AuthenticationContext(authority, null);
        X509Certificate2 certificate;
        X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        try
        {
            store.Open(OpenFlags.ReadOnly);
            X509Certificate2Collection certificateCollection = store.Certificates.Find(
                X509FindType.FindByThumbprint, certificateThumbprint, false);
            if (certificateCollection == null || certificateCollection.Count == 0)
            {
                throw new Exception("Certificate not installed in the store");
            }

            certificate = certificateCollection[0];
        }
        finally
        {
            store.Close();
        }

        var clientAssertionCertificate = new ClientAssertionCertificate(applicationId, certificate);
        var result = await authenticationContext.AcquireTokenAsync(resource, clientAssertionCertificate);
        return result.AccessToken;
    });

    var secretIdentifier = "https://rahulkeyvault.vault.azure.net/secrets/mySecretName";
    var secret = await keyClient.GetSecretAsync(secretIdentifier);

    log.Info($"Secret Value: {secret.Value}");
}

Make sure you add the project.json, as seen in the previous post, to pull in the required NuGet packages. The Azure Function now uses the certificate to authenticate with Key Vault and retrieve the secret.
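If you don't have the previous post at hand, a project.json along these lines should work (the exact package versions are my assumptions; use the ones that match your setup):

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.Azure.KeyVault": "2.3.2",
        "Microsoft.IdentityModel.Clients.ActiveDirectory": "3.13.9"
      }
    }
  }
}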

Hope this helps!

Pocket

I try to stay offline for fixed times during the day and often prepare myself for it. Having things to read is an important part of that. Feedly helps me keep track of all my reading sources while I am online. Some articles need more time and focus to be well understood, and I often end up 'saving them for later.'

Pocket is an app that helps manage articles that you wish to read later. You can save articles, videos or pretty much anything into Pocket and view them later. The best thing about Pocket is that on mobile devices, it allows offline reading - i.e., without the need for an internet connection.

Pocket has apps and browser extensions for a variety of platforms, making it easy to save articles that you find interesting. You can save to Pocket from your laptop or your mobile devices and have the article available for later reading. Feedly integrates with Pocket and allows saving articles for future reading straight to Pocket. I am using the free version of Pocket, and it works perfectly for me. But if you are interested in more advanced features, you can upgrade to the Premium version.

Don’t miss out on that article that you want to read (later), Get Pocket!