Visual Studio (VS) 2017 improves a lot on Code Navigation features. If anything ever attracted me to ReSharper, it was code navigation (though I have not been using it for a couple of years now). Visual Studio lagged behind in this aspect, but not anymore. The new features help improve developer productivity, and here are a few I found worth a look.

In earlier versions, the navigation features were scattered across the menus. In VS2017 all the navigation features are available under Edit -> Go To. Though I usually prefer using the keyboard shortcuts for opening the navigation dialogs, it's good to have them all under the same menu.

Visual Studio 2017 Code Navigation

Control + T brings up the Go To dialog box, which now has a lot more icons and options compared to previous VS versions. Typing in '?' lists the different filtering criteria available when querying. It also allows searching across all the criteria by simply not specifying a filter. To search for a file, type in 'f FileName'. These filters also have direct shortcuts, as shown in the image above.

Visual Studio 2017 Go To Shortcuts

CamelCase search is the one that attracts me the most. By entering just the capital letters of the name, it quickly filters down to the particular class. As shown below, just typing CSG brings up the class CharSequenceGenerator. Typing in the full class name is no longer required, which makes navigation faster.

Visual Studio 2017 Go To Camel Case Matching

Check out the video to see these features and more in action. Hope it helps!

Rescue Time - My dashboard for March 2017

Tracking is essential for measuring progress. Depending on your area of focus, tracking tools differ. If it is a time-based activity that you are tracking, a simple watch can suffice. But this can soon become an overhead. Rescue Time helps track time spent on computers and mobile devices.

Rescue Time is a personal analytics service that shows you how you spend your time and provides tools to help you be more productive.

Rescue Time runs in the background and helps you track the applications and websites that you use. Most applications are categorized automatically; however, it also allows manual categorization. Rescue Time lets you edit an activity, assign it to various categories, and set productivity levels. So if you spend more time on an application configured as very productive, your overall productivity pulse is higher for the day.

Rescue Time - Edit Activity

Once you have your time tracked, you can adapt yourself to be more productive. Various reports are provided to visualize the data collected. The premium version offers a lot more features to help improve productivity; I am currently on the Free Plan. Rescue Time helps inspect your current behaviors and patterns at work. Once you have the details, you can understand where your time is spent and improve on it as required. Rescue Time is available for computers and mobile devices. Hope this helps you track your time and become more productive!

“Oh yes! That is an expected error. It is because…”

How many times have you given that explanation yourself or heard another developer say it? Known errors or exceptions are common in applications, and we developers find ways to live with them. When the number of such errors grows, it becomes a problem, directly or indirectly, for the business. These known errors could be exceptions in application logs, failed messages (commands/events) in a message-based architecture, alert popups in Windows client applications, etc.

Known Errors

We should try to keep the count of known errors and exceptions close to zero. Below are some of the problems that can arise from ignoring them over a period of time.

Business Value

Since the errors are known to us, we train ourselves, or even the users, to ignore them. It is easy to justify that fixing them has no business value as there is no direct impact. This assumption need not be true. If a piece of code has no value, then why is it there in the first place? Possibly it has no visible effects at present but will have an impact at a later point in time. It could also be that it does not affect the data consistency of your system but is a problem for an external system. There can be business flows written at a later point in time that are not aware of this known error. Some developer time is lost glancing over such errors or messages in the log, which directly equates to money for the business.

Important Errors Missed

If there are a lot of such known errors, it is easy for new or important ones to get missed or ignored. Depending on its frequency, a known error can end up flooding the logs. With lots of such known errors, the logs become overwhelming to monitor or to trace for other issues. The natural tendency for people when they find something overwhelming is to ignore it. I worked on a system which had over 250 failed messages coming to the error queue daily. It was overwhelming to monitor them, and they soon started getting ignored. Important errors were getting missed and often ended up as support requests for the application. Such errors could otherwise have been handled proactively, giving the end user more confidence.

Lower Perceived Stability

The overall perceived stability of the system comes down as more and more such errors happen. This applies to both users and developers. When errors no longer get monitored or tracked, critical errors get ignored. Users have to resort to other means, like support requests, for the errors they face. For users who are new to the system, it might take a while to get used to the known errors. These errors decrease the trust they have in the system, and they soon start suspecting everything as an issue or a problem.

Seeing more and more of such errors does not leave a positive impact on the developers. It's possible that developers lose interest in working on an unstable system and start looking for a change. It is also a challenge when new members join the team. It takes time for them to get used to the errors and exceptions and to learn to ignore them.

Stereotyping Exceptions

Errors of a particular type can get stereotyped together and get ignored, mistaken for one that is already known. It is easy for different 'object null reference exception' error messages to be treated as a single error, whereas they could be failing for various reasons. At one of my clients, we had a specific message type failing with the null reference error. We had identified the reason for one such message and found that it was not causing 'any direct business impact' and could be ignored. The message was failing because one of the properties on the message was alphanumeric while the code expected numeric. The simple fix in the code would have been to validate it, but since this was not causing any business impact, it was ignored, and messages of that type kept piling up. Until later, when we found that there were other message formats of the same message type failing for a different reason. And those messages were causing a loss of revenue to the business. But since we were stereotyping the error messages of that type to the one we had found invalid and without business impact, all such messages were ignored. The stereotyping resulted in the important messages getting ignored.

Maintaining a Known Bugs Database

When there are a large number of such errors, it becomes important to document a list of them. It forces us to create a new document and also comes with the responsibility of maintaining it. Any new developers or users joining the system need to go through the documentation to verify whether an error is known or not. Internalizing these errors might take some time, and critical errors can get missed during this period. Any such document needs to be kept current and up to date as new errors are found or more details are found for older ones. This is not the best place for a developer's time to be spent.

Count Keeps Increasing

If the count of such errors is not monitored or valued, the probability of the number of error messages increasing is higher. New errors getting introduced will not be noticed, and even when noticed, they become acceptable: we already have a lot of them, so it is fine. It sets a wrong goal for the team and can soon become unmanageable.

New Business Flow Assuming Exceptions

Since we are so used to the exceptions, it is highly possible that we set them as an expectation. New business flows come up expecting a certain kind of exception to be thrown or assuming a particular type of message will not get processed. Since we are so used to the fact that it happens, we take it for granted and start coding against it. It might be the last thing you expect to happen on a project, but believe me, it happens! Such code becomes harder to maintain and might not work once the actual exception gets fixed.

Ignoring exceptions and learning to live with them can be more costly over a longer period. The further we delay action on such errors, the higher the cost involved. Even though there is no immediate or direct business value seen in fixing such errors, we saw that in the longer run they can have a great impact. So try not to live with such errors; instead, prioritize them along with the work your team is doing and get them fixed. A fix might not always be an extra null check or a conditional to avoid the error. That might seem the easier approach to reducing the errors but will soon become a different problem. Understand the business and explore what is causing the error. Do you have any known exceptions in the application you are working on? What are you doing about them?

Tomighty, Pomodoro Timer

Over the past couple of weeks, I have been trying to improve my focus while working. With running (3 * 1.5 hours a week) and bodyweight training (3 * 30 minutes a week) taking a significant part of my morning routine, I have less time for blogging, learning, and videos. Though I have known about The Pomodoro Technique for a long time, I never practiced it regularly. With less time and more things to get done, I badly needed to do something to get back on track with everything and thought of giving it a try.

The Pomodoro Technique is a time management technique that uses a timer to break down work into intervals, traditionally 25 minutes in length, separated by short breaks. These intervals are named Pomodoro

Initially, I was looking at apps that can integrate with Todoist, my task management tool. There are a lot of pomodoro apps that integrate with Todoist, but I found all of them overkill. Tomighty is a simple Pomodoro timer that just tracks time and the settings for the Pomodoro interval and the long and short breaks. It hides away well in the notification area of the taskbar and shows the amount of time left in the current interval. It plays sounds when an interval starts and ends. You can interrupt a Pomodoro session and restart it if required. That is all you need from a timer to keep up with the Pomodoro Technique.

If you are on a high DPI machine running Windows, the UI might not scale well. There is a workaround for this.

Sticking to the Pomodoro Technique has been working well for me, and I am able to focus better on the task at hand. I am still exploring the technique and trying to improve on it. Do you use the Pomodoro Technique? If you are new to it and want to learn more, check out the book, The Pomodoro Technique, by Francesco Cirillo, the creator of the technique.

Last month I attended the Professional Scrum Master (PSM) training. Richard Banks, who also happens to be my colleague at Readify, was the instructor. It is a two-day training and was a great experience for me. It changed my perception of Scrum. Before the training, I was not that fascinated by Scrum and often argued against it, citing that it's development practices we need to uphold more, not Scrum. After the training, I have still not changed on the latter part, but my perception of Scrum has changed. Here are some of my key takeaways from the training.

TLDR;

  • Stick to the Process
  • Make Daily Scrum (a.k.a Standup) effective by setting mini goals
  • Understand the different Roles and imbibe the responsibilities
  • Have a well-defined Definition of Done
  • Deliver value in every sprint
  • Foster team collaboration and openness
  • Scrum is not the Silver Bullet. Other development practices must be followed in parallel.

If you are completely new to Scrum or not so familiar with the different terms used, I recommend that you read The Scrum Guide. Don't refrain from clicking that link thinking it is a long book. It is a short one, and you can finish it in less than an hour. It is an hour well spent!

Scrum Framework

Sticking to the Process

Everyone who was attending the training was already using Scrum, but… This is common with almost everything, and specifically with Scrum: people take parts of it and change the system to suit their needs. This is fine as long as you have tried the original process long enough to understand its pros and cons.

Tweaking the process to suit existing practices as soon as you start with a new process is just resistance to change

As Uncle Bob Martin says in Clean Code, dedicate yourself to the process and follow it religiously until you have mastered it. You can start modifying the process or finding new ways once you have reached that state. But till then, follow it religiously if you want to see any benefit.

Martial artists do not all agree about the best martial art, or the best technique within a martial art. Often master martial artists will form their own schools of thought and gather students to learn from them. So we see Gracie Jiu Jistu, founded and taught by the Gracie family in Brazil. We see Hakkoryu Jiu Jistu, founded and taught by Okuyama Ryuho in Tokyo. We see Jeet Kune Do, founded and taught by Bruce Lee in the United States.

Students of these approaches immerse themselves in the teachings of the founder. They dedicate themselves to learn what that particular master teaches, often to the exclusion of any other master’s teaching. Later, as the students grow in their art, they may become the student of a different master so they can broaden their knowledge and practice. Some eventually go on to refine their skills, discovering new techniques and founding their own schools. None of these different schools is absolutely right. Yet within a particular school we act as though the teachings and techniques are right. After all, there is a right way to practice Hakkoryu Jiu Jitsu, or Jeet Kune Do. But this rightness within a school does not invalidate the teachings of a different school.

-Uncle Bob Martin, Clean Code

Effectiveness of Daily Scrum

The Daily Scrum, otherwise popular as the Standup meeting, is not about standing up to give status updates. The intent of the Daily Scrum is for the development team to get together and work out whatever is required to get closer to the goal set for the sprint. Standing up improves your activity and interaction levels, so it's recommended to stand up. The primary intent of the fifteen-minute time-boxed event (it can take no more than the allotted time) is to look at how we tracked against our 'mini goals' from the last Daily Scrum and to set new 'mini goals' for the next one. Also, capture any impediments or blockers and make sure there are people assigned to follow up on them. The Daily Scrum does not necessarily need to be the first thing in the morning, though that is usually preferred. Currently, I am on a 4 pm Daily Scrum.

Roles and Their Importance

Understanding the different roles in Scrum is necessary. A Scrum Team consists of:

  • The Product Owner
  • Scrum Master
  • Development team

Scrum Teams are self-organizing and cross-functional. Self-organizing teams choose how best to accomplish their work, rather than being directed by others outside the team. Cross-functional teams have all competencies needed to accomplish the work without depending on others not part of the team. The team model in Scrum is designed to optimize flexibility, creativity, and productivity.

Most important is to understand the responsibilities of each role and adhere to them as much as possible. Remember, Sticking to the Process is important. Each of the roles must have the courage and openness to act according to their responsibilities.

Definition of Done

Having a previously agreed-on Definition of Done is important for deciding when a piece of work is done. A common understanding of the Definition of Done ensures that we do not ship half-baked features. It also provides a guide for non-functional requirements, which often get sidelined or missed and later become show-stoppers.

As Scrum Teams mature, it is expected that their definitions of “Done” will expand to include more stringent criteria for higher quality. Any one product or system should have a definition of “Done” that is a standard for any work done on it.

Delivering Value

One of the golden rules is that every sprint must produce an increment that adds some business value. It need not be released to production, but it must be production ready. The Product Owner makes release decisions.

The Increment is the sum of all the Product Backlog items completed during a Sprint and the value of the increments of all previous Sprints

This hints that having sprints solely for handling technical debt, refactoring, project setup, infrastructure setup, etc. is not recommended.

Team collaboration and Effectiveness

The process heavily depends on the people comprising the Scrum Team, so it is important to have a good rapport among the team members. The team members should be comfortable and open with each other. The team needs to be self-organizing. When starting with Scrum, this might not be the case, and the team must recognize this and work towards self-organization. The Scrum Master must encourage and guide the team to reach that state of independence.

When the values of commitment, courage, focus, openness, and respect are embodied and lived by the Scrum Team, the Scrum pillars of transparency, inspection, and adaptation come to life and build trust for everyone.

Scrum Values

The Scrum Guide calls out the above five core values that a team should demonstrate for the framework to be effective. The Scrum Team should constantly try to live these core values and improve on them.

Not the Silver Bullet for Development

There’s a mess I’ve heard about with quite a few projects recently. It works out like this:

  • They want to use an agile process, and pick Scrum
  • They adopt the Scrum practices, and maybe even the principles
  • After a while progress is slow because the code base is a mess

-Martin Fowler, Flaccid Scrum

If you are looking to Scrum to solve problems with the quality of the code you are delivering, you are with the wrong process. Scrum does not say anything specific about software development and related practices. Extreme Programming is one process that is specific to coding and software development and is also a form of agile software development. It talks about various development practices and lays down strict rules related to software development. Again, if you decide to follow it, do so religiously - Sticking to the Process matters. Picking up only certain aspects of different schools of thought might not always provide the desired outcome. Code Coverage is one such practice that is often picked up in isolation and seen to produce little value.

Given these new insights, I now feel that following a proven process framework like Scrum will help a team achieve its goals. The core values that Scrum lays down are important for any team. From my personal experience, the reason I used to look down on Scrum is that it was not practiced the way it was laid out. As in software development, premature optimization of the process is likely to give less value. The training provided a lot of new insights and helped me gain a better understanding of the Scrum process framework. I did the PSM I and PSPO I assessments (I had attended that training last year but never took the test until now) and got certified. Are you using Scrum? What are your experiences?

While working on large codebases, I want my Solution Explorer to be synchronized with the current working file. With the Solution Explorer in sync, navigating to other related files, adding new classes in the same location, renaming files, etc. becomes faster.

Track Active Item in Solution Explorer, Visual Studio

The setting to keep the items in sync is configurable in Visual Studio and is turned off by default. You can enable it by checking 'Track Active Item in Solution Explorer' under Options -> Projects and Solutions -> General. You can navigate there quickly using Visual Studio Quick Launch (Ctrl + Q): just type 'Track active' and you will get a quick link to the setting. Keep it checked, and off you go - the Solution Explorer and the current file will be in sync.

PS: Visual Studio 2017 is now available. Get it if you have not already!

Populating data for tests is the part of a test that usually ends up coupling tests to the code they are testing. Coupling makes tests more fragile and makes refactoring harder because of breaking tests. We should try to avoid coupling with implementation details when writing tests. Let us look at a few options we have for populating test data and constructing object graphs (the chain of objects branched off from the root object). I use xUnit.net as my test framework, but you can use these techniques in your framework of choice.

Populating Test Data

Let’s start with some simple tests on a Customer class shown below.

public class Customer
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }

    public Address Address { get; set; }
}

Let's say we need to test that the FullName property returns as expected. We will use Theory type tests for testing different combinations of first and last names. xUnit.net includes support for two major types of unit tests: facts and theories.

Facts are tests which are always true. They test invariant conditions.

Theories are tests which are only true for a particular set of data.

Theories allow us to create parameterized tests, which let us run a given test with different parameter options. In this example, we need to test the Customer class with different sets of first and last name combinations. As you can see below, the test is attributed with the Theory attribute, and we use the InlineData attribute to pass static values to the test. Using these parameters, we can now test different combinations of first and last names. The test populates only the properties on the Customer object required for testing FullName.

[Theory]
[InlineData("Adobe", "Photoshop", "Adobe Photoshop")]
[InlineData("Visual", "Studio", "Visual Studio")]
[InlineData("Rode", "Podcaster", "Rode Podcaster")]
public void CustomerFullNameReturnsExpected(string firstName, string lastName, string expected)
{
    // Fixture setup
    var customer = new Customer() { FirstName = firstName, LastName = lastName };
    // Exercise system
    var actual = customer.FullName;
    // Verify outcome
    Assert.Equal(expected, actual);
    // Teardown
}

Tests help refine the public API as they are the first consumers

The tests above act as a clue that the three properties - FirstName, LastName, and FullName - are related and go hand in hand. They are a strong indication that these properties can be grouped together into a class and possibly tested separately. We can extract these properties into a Value Object, e.g. Name. I will not go into the implementation details of that, and I hope you can do that on your own.

The above tests still have a high dependency on the code they are testing - the constructor. Imagine if we had a lot of such tests constructing the Customer class inline in the setup phase: all of them would break if the class constructor changed. We saw in the refactoring to remove constructor dependency how to remove such dependencies and make the tests independent of the constructor. We can introduce the Object Mother or Test Data Builder pattern, as mentioned in that article. Optimizing further, we can also use AutoFixture to generate test data. Moving to these patterns or to AutoFixture brings an added benefit as well: the rest of the properties on the Customer class also get populated by default.
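As a quick illustration, a Test Data Builder for the Customer class could look like the sketch below. The builder name and default values are mine, not from the refactoring post; the point is that the builder becomes the single place that knows how a Customer is constructed, so a constructor change touches one class instead of every test.

public class CustomerBuilder
{
    // Sensible defaults so that tests only override what they care about
    private string firstName = "First";
    private string lastName = "Last";

    public CustomerBuilder WithFirstName(string firstName)
    {
        this.firstName = firstName;
        return this;
    }

    public CustomerBuilder WithLastName(string lastName)
    {
        this.lastName = lastName;
        return this;
    }

    public Customer Build()
    {
        return new Customer { FirstName = firstName, LastName = lastName };
    }
}

// Usage in the fixture setup phase of a test:
// var customer = new CustomerBuilder().WithFirstName("Visual").WithLastName("Studio").Build();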

Explicitly Setting Properties

By introducing AutoFixture, we no longer need to create the Customer object explicitly. We can use the Fixture class to generate a Customer object for us. Using AutoFixture, this can be achieved in at least two ways (I am not sure if there are more ways of doing this).

Using Fixture class
[Theory]
[InlineData("Adobe", "Photoshop", "Adobe Photoshop")]
[InlineData("Visual", "Studio", "Visual Studio")]
[InlineData("Rode", "Podcaster", "Rode Podcaster")]
public void CustomerFullNameReturnsExpected(string firstName, string lastName, string expected)
{
    // Fixture setup
    var fixture = new Fixture();
    var customer = fixture.Build<Customer>()
        .With(a => a.FirstName, firstName)
        .With(a => a.LastName, lastName)
        .Create();
    // Exercise system
    var actual = customer.FullName;
    // Verify outcome
    Assert.Equal(expected, actual);
    // Teardown
}
Using Injected Object
[Theory]
[InlineAutoData("Adobe", "Photoshop", "Adobe Photoshop")]
[InlineAutoData("Visual", "Studio", "Visual Studio")]
[InlineAutoData("Rode", "Podcaster", "Rode Podcaster")]
public void CustomerFullNameReturnsExpected(string firstName, string lastName, string expected, Customer customer)
{
    // Fixture setup
    customer.FirstName = firstName;
    customer.LastName = lastName;
    // Exercise system
    var actual = customer.FullName;
    // Verify outcome
    Assert.Equal(expected, actual);
    // Teardown
}

In both cases, we explicitly set the required properties. The above test is similar to the previous test that we wrote without AutoFixture, but we are no longer dependent on the constructor. In the second approach, I used the InlineAutoData attribute, which is part of Ploeh.AutoFixture.Xunit2. This attribute automatically does the fixture initialization and injects the Customer object for us. For all the parameters it can match from the inline list, it uses the provided values; it starts generating random values once all the inline parameters are used up. In this case, only the Customer object is created by AutoFixture.

AutoFixture and Immutable types

When using immutable types or properties with private setters, we cannot set the property values after the object is created.

AutoFixture was originally built as a tool for Test-Driven Development (TDD), and TDD is all about feedback. In the spirit of GOOS, you should listen to your tests. If the tests are hard to write, you should consider your API design. AutoFixture tends to amplify that sort of feedback.

-Mark Seemann (creator of AutoFixture)

In these cases, the suggested approach is something closer to the manual Test Data Builder we saw in the refactoring example. We can either have an explicit test data builder class or define methods on the immutable type that change just the specified property and keep all the other values the same, as shown below.

public class Name
{
    public readonly string FirstName;
    public readonly string LastName;
    public string FullName
    {
        get
        {
            return FirstName + " " + LastName;
        }
    }

    public Name(string firstName, string lastName)
    {
        // Enforce parameter constraints
        FirstName = firstName;
        LastName = lastName;
    }

    public Name WithFirstName(string firstName)
    {
        return new Name(firstName, this.LastName);
    }
}

As shown, the WithFirstName method returns a new Name instance with just the first name changed. Again, we do not need these WithXXX methods for all the properties. Only when there is a need to change a property value as part of a requirement do we need to introduce such a method and test it. This again drives home the point above of using tests, and their feedback, to guide the API design.
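To illustrate, a test for the WithFirstName method might look like the following. This is a minimal sketch; the test name and values are mine, not from the original post.

[Fact]
public void WithFirstNameReturnsNewNameWithUpdatedFirstName()
{
    // Fixture setup
    var name = new Name("Visual", "Studio");
    // Exercise system
    var actual = name.WithFirstName("Android");
    // Verify outcome: the first name changes, the last name is carried over
    Assert.Equal("Android", actual.FirstName);
    Assert.Equal("Studio", actual.LastName);
}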

Customization

In cases where we have validations in the constructor to enforce class constraints, we cannot rely on the random values generated by AutoFixture. For example:

  • The string should be at least ten characters in length for a Name class
  • The start date should be less than the end date for a date range class

Without any custom code, if we rely on AutoFixture to generate such classes for us, the tests will not be predictable. Depending on the random values AutoFixture generates, it might create a valid instance or throw an exception. To make this consistent, we can add a Customization to ensure predictability.

For the DateRange class below we can add the following Customization.

public class DateRange
{
    public readonly DateTime EndDate;
    public readonly DateTime StartDate;

    public DateRange(DateTime startDate, DateTime endDate)
    {
        if (endDate < startDate)
            throw new Exception("End date cannot be less than the start date");

        StartDate = startDate;
        EndDate = endDate;
    }
}
DateRange Customization
public class DateRangeCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        fixture.Customizations.Add(new DateRangeSpecimenBuilder());
    }
}

public class DateRangeSpecimenBuilder : ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        var requestAsType = request as Type;
        if (typeof(DateRange).Equals(requestAsType))
        {
            var times = context.CreateMany<DateTime>();
            return new DateRange(times.Min(), times.Max());
        }

        return new NoSpecimen();
    }
}

The customization gets invoked every time a DateRange object is requested from the fixture. It runs the custom code we added and creates a valid DateRange object. In the tests, use the customization as part of the fixture, either via a custom data attribute or by explicitly adding the customization to the Fixture class.
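For example, explicitly adding the customization to the Fixture class in a test could look like this (a minimal sketch; the test name is illustrative):

[Fact]
public void DateRangeCreatedByFixtureIsAlwaysValid()
{
    // Fixture setup: register the customization so DateRange requests succeed
    var fixture = new Fixture();
    fixture.Customize(new DateRangeCustomization());
    // Exercise system
    var range = fixture.Create<DateRange>();
    // Verify outcome
    Assert.True(range.StartDate <= range.EndDate);
}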

Mocking behavior

Mock Objects are a popular way to unit test classes in isolation. The external dependencies of a System Under Test (SUT) are mocked using a mocking framework. We can then set up the dependencies to return the values we expect for different tests and verify the logic of the SUT and how it responds. Such tests are usually more coupled to the implementation, as we have to set up the mocks beforehand, so we need an understanding of the return values and the parameters expected by the dependencies. I use the Moq framework for mocking, and AutoFixture has a library that integrates well with it.

public HttpResponseMessage Get(Guid id)
{
    var customer = CustomerRepository.Get(id);

    if (customer == null)
        return Request.CreateResponse(HttpStatusCode.NotFound, "Customer not Found with id " + id);

    return Request.CreateResponse(HttpStatusCode.OK, customer);
}
[Theory]
[InlineAutoMoqData]
public void CustomerControllerGetWithNoCustomerReturnsNotFound(
    Guid customerId,
    [Frozen]Mock<ICustomerRepository> customerRepository,
    CustomerController sut)
{
    // Fixture setup
    customerRepository.Setup(a => a.Get(customerId)).Returns((Customer)null);
    var expected = HttpStatusCode.NotFound;

    // Exercise system
    var actual = sut.Get(customerId).StatusCode;

    // Verify outcome
    Assert.Equal(expected, actual);
}

The test above uses the InlineAutoMoqData attribute, which is a customized xUnit data attribute that uses the Moq framework to inject dependencies. The Mock<ICustomerRepository> represents a mocked interface implementation, and behavior is set up on the mock using the Setup method. By using the Frozen attribute on the Mock parameter, we tell AutoFixture to create only one instance of the mocked object and use that same instance for any future requests of the type. This ensures that the same repository instance is also injected into the CustomerController when it asks AutoFixture for an ICustomerRepository.
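The post does not show how InlineAutoMoqData is defined. One common way to build such an attribute on top of Ploeh.AutoFixture.Xunit2 and Ploeh.AutoFixture.AutoMoq is sketched below; treat it as an assumption about the implementation rather than the exact attribute used here.

// Wires AutoFixture up with the AutoMoq customization so that interface
// parameters (like Mock<ICustomerRepository>) are created by Moq
public class AutoMoqDataAttribute : AutoDataAttribute
{
    public AutoMoqDataAttribute()
        : base(new Fixture().Customize(new AutoMoqCustomization()))
    {
    }
}

// Combines inline values with auto-generated (and auto-mocked) parameters
public class InlineAutoMoqDataAttribute : InlineAutoDataAttribute
{
    public InlineAutoMoqDataAttribute(params object[] values)
        : base(new AutoMoqDataAttribute(), values)
    {
    }
}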

Creating test data is an important aspect of any test. Minimizing the dependencies on implementation details is important to make your tests more robust. This allows the code to be refactored as long as the core contracts we are testing remain the same. AutoFixture helps minimize the code in the fixture setup phase, which otherwise tends to grow bigger. Hope this helps you with your tests!

Azure Powershell Get-AzureRmContext

Accessing Key Vault using PowerShell can be a bit tricky when you have multiple subscriptions under the same account. The Key Vault cmdlets, being under Resource Manager (RM) mode, depend on the current RM subscription and let you manage only the key vaults under the selected subscription. To access key vaults in other subscriptions, you need to switch the selected RM subscription.

Use Select-AzureRmSubscription to switch the selected RM subscription

Get-AzureRmContext returns the metadata used for RM requests. The SubscriptionId/SubscriptionName property indicates the selected subscription, and any Key Vault cmdlet (or RM cmdlet) works based off that selected subscription. To change the selected Azure RM subscription, use the Select-AzureRmSubscription cmdlet: pass in the SubscriptionId or SubscriptionName that you wish to switch to, and the RM subscription will be set to that. To get the SubscriptionId/SubscriptionName of the subscriptions under your account, use the Get-AzureRmSubscription cmdlet.

 Get-AzureRmContext
 Get-AzureRmSubscription
 Select-AzureRmSubscription -SubscriptionName  "Your Subscription Name"
 Select-AzureRmSubscription -SubscriptionId  a5287dad-d5a2-4060-81bc-4a06c7087e72

I struggled with this for some time, so hope it helps you!


In one of my earlier posts, PFX Certificate in Azure Key Vault, we saw how to save PFX certificate files in Key Vault as Secrets. Azure Key Vault now supports certificates as a first-class citizen, which means you can manage certificates as a separate entity in Key Vault. At the time of writing, Key Vault supports managing certificates using PowerShell; the portal UI is yet to catch up on this feature. Using the Key Vault certificate feature, we can create a new certificate (self-signed or signed by a supported certificate authority), import an existing certificate, and retrieve a certificate with or without the private key part.

Setting up the Vault

With the introduction of the certificates feature, a new command-line switch, -PermissionsToCertificates, is added to the Set-AzureRmKeyVaultAccessPolicy cmdlet. It supports the following values: all, get, create, delete, import, list, update, deleteissuers, getissuers, listissuers, setissuers, managecontacts. For a key vault created after the introduction of this feature, the permission is set to all in the creator's access policy. For any vault created before the feature was introduced, this permission needs to be set explicitly to start using it.

Create Certificate

To create a new certificate in the vault use the Add-AzureKeyVaultCertificate cmdlet. The cmdlet requires a Certificate Policy that specifies the subject name, issuer name, validity, etc.

$certificatepolicy = New-AzureKeyVaultCertificatePolicy -SubjectName "CN=www.rahulpnath.com" -IssuerName Self -ValidityInMonths 12
Add-AzureKeyVaultCertificate -VaultName "VaultFromCode" -Name "TestCertificate" -CertificatePolicy $certificatepolicy

Executing the above creates a certificate in the vault with the given name. The certificate object identifier is similar to that of Keys and Secrets, as shown below, and is used to uniquely identify a certificate.

https://vaultfromcode.vault.azure.net:443/certificates/TestCertificate

To retrieve all the certificates in a vault, use the Get-AzureKeyVaultCertificate cmdlet, passing in the VaultName. To get the details of a specific certificate, pass in the certificate name as well.

Azure Key Vault, GetAzureKeyVaultCertificate

When creating a new certificate, make sure that a key or secret with the same name does not already exist in the vault. Azure adds a key and a secret with the same name as the certificate when creating a new certificate, as shown in the image above. The key is required for certificates created with a non-exportable key (-KeyNotExportable). Non-exportable certificates do not have the private portion contained in the secret; any certificate operation requiring the private part should use the key. For consistency, the key exists for exportable certificates as well.

To import an existing certificate into the key vault, we can use the Import-AzureKeyVaultCertificate cmdlet. The certificate file should be in either PFX or PEM format.

Recreate Certificate Locally from Key Vault

Often we will have to recreate the certificate on the machine where the application using it runs. To recreate the private portion of the certificate, retrieve it from the secret, load it into a certificate collection, export it, and save the file locally.

$kvSecret = Get-AzureKeyVaultSecret -VaultName 'VaultFromCode' -Name 'TestCertificate'
$kvSecretBytes = [System.Convert]::FromBase64String($kvSecret.SecretValueText)
$certCollection = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2Collection
$certCollection.Import($kvSecretBytes,$null,[System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable)
$protectedCertificateBytes = $certCollection.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pkcs12, 'test')
$pfxPath = 'C:\cert\test.pfx'
[System.IO.File]::WriteAllBytes($pfxPath, $protectedCertificateBytes)

Similarly, to export the public portion of the certificate:

$cert = Get-AzureKeyVaultCertificate -VaultName 'VaultFromCode' -Name 'TestCertificate'
$filePath ='C:\cert\TestCertificate.cer'
$certBytes = $cert.Certificate.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Cert)
[System.IO.File]::WriteAllBytes($filePath, $certBytes)

Delete Certificate

To delete a certificate use the Remove-AzureKeyVaultCertificate cmdlet and pass in the vault name and certificate name.

Remove-AzureKeyVaultCertificate -VaultName 'VaultFromCode' -Name 'TestCertificate'

Hope this helps you to get started with managing certificates in Azure Key Vault.

First off, I would like to thank all of you who made it to the talk on Azure Key Vault at Alt.Net Sydney. I enjoyed giving the session and hope you liked it as well.

Azure Key Vault session, Alt.Net Sydney - Pic by Richard Banks

As a follow-up to the talk, I thought of putting up a list of resources that will help you jump-start with Azure Key Vault.

Thank you again for attending the talk. For any queries, feel free to reach out to me or the Azure Key Vault MSDN forum. Hope this helps you jump-start using Key Vault in the applications you are currently building.