I came across a question in the Azure Key Vault forums asking for options to get notified when keys or secrets in a vault near expiry. It's useful to know when objects (keys/secrets) are nearing expiry so that you can take the necessary action. I decided to explore my proposed solution: a scheduled custom PowerShell script that notifies you when a key is about to expire. In this post, we will see how to get all objects nearing expiry and how to schedule the check to run daily using an Azure Runbook.

Getting Expiring Objects

Both keys and secrets can be given an expiry date. It can be set when creating the object or on an existing object, either from the UI or using PowerShell (via the -Expires parameter).

Azure Key Vault - Set Key Expiry
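For example, with the AzureRM Key Vault cmdlets the expiry can be set from PowerShell. A minimal sketch, assuming a vault, key and secret with these placeholder names already exist:

# Set the key and the secret to expire a year from now
Set-AzureKeyVaultKeyAttributes -VaultName 'MyVault' -Name 'MyKey' -Expires (Get-Date).AddYears(1)
Set-AzureKeyVaultSecretAttributes -VaultName 'MyVault' -Name 'MySecret' -Expires (Get-Date).AddYears(1)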

Key Vault (at the time of writing) throws an exception when an expired key is accessed over the API. It also does not provide any notification when a key/secret is about to expire. The last thing you want is your application going down because of an expired object in the vault. With Get and List access on the vault, we can retrieve all the keys and secrets and loop through them to find objects that are nearing expiry.

The PowerShell script takes the vault name, the number of days before expiry at which an alert should be raised, and flags to indicate whether all versions of keys/secrets should be checked for expiry. The full script is available here.

$VaultName = ''
$IncludeAllKeyVersions = $true
$IncludeAllSecretVersions = $true
$AlertBeforeDays = 3

All keys and secrets are converted into a common object model, which contains just the Identifier, Name, Version and the Expiry Date if it has one.

Function New-KeyVaultObject
{
    param
    (
        [string]$Id,
        [string]$Name,
        [string]$Version,
        [System.Nullable[DateTime]]$Expires
    )

    $server = New-Object -TypeName PSObject
    $server | Add-Member -MemberType NoteProperty -Name Id -Value $Id
    $server | Add-Member -MemberType NoteProperty -Name Name -Value $Name
    $server | Add-Member -MemberType NoteProperty -Name Version -Value $Version
    $server | Add-Member -MemberType NoteProperty -Name Expires -Value $Expires

    return $server
}

Depending on the flag set for retrieving all key/secret versions, the functions below fetch objects from the vault and return them in the common object model above.

function Get-AzureKeyVaultObjectKeys
{
  param
  (
   [string]$VaultName,
   [bool]$IncludeAllVersions
  )

  $vaultObjects = [System.Collections.ArrayList]@()
  $allKeys = Get-AzureKeyVaultKey -VaultName $VaultName
  foreach ($key in $allKeys) {
    if($IncludeAllVersions){
      $allKeyVersions = Get-AzureKeyVaultKey -VaultName $VaultName -IncludeVersions -Name $key.Name
      foreach($keyVersion in $allKeyVersions){
        $vaultObject = New-KeyVaultObject -Id $keyVersion.Id -Name $keyVersion.Name -Version $keyVersion.Version -Expires $keyVersion.Expires
        # Pipe Add to Out-Null so the returned index does not pollute the function output
        $vaultObjects.Add($vaultObject) | Out-Null
      }
    } else {
      $vaultObject = New-KeyVaultObject -Id $key.Id -Name $key.Name -Version $key.Version -Expires $key.Expires
      $vaultObjects.Add($vaultObject) | Out-Null
    }
  }

  return $vaultObjects
}

function Get-AzureKeyVaultObjectSecrets
{
  param
  (
   [string]$VaultName,
   [bool]$IncludeAllVersions
  )

  $vaultObjects = [System.Collections.ArrayList]@()
  $allSecrets = Get-AzureKeyVaultSecret -VaultName $VaultName
  foreach ($secret in $allSecrets) {
    if($IncludeAllVersions){
      $allSecretVersions = Get-AzureKeyVaultSecret -VaultName $VaultName -IncludeVersions -Name $secret.Name
      foreach($secretVersion in $allSecretVersions){
        $vaultObject = New-KeyVaultObject -Id $secretVersion.Id -Name $secretVersion.Name -Version $secretVersion.Version -Expires $secretVersion.Expires
        $vaultObjects.Add($vaultObject) | Out-Null
      }
    } else {
      $vaultObject = New-KeyVaultObject -Id $secret.Id -Name $secret.Name -Version $secret.Version -Expires $secret.Expires
      $vaultObjects.Add($vaultObject) | Out-Null
    }
  }

  return $vaultObjects
}

Now that we have all the keys and secrets we want to check for expiry, all that remains is to find the ones that are expiring in the upcoming days.

$allKeyVaultObjects = [System.Collections.ArrayList]@()
$allKeyVaultObjects.AddRange((Get-AzureKeyVaultObjectKeys -VaultName $VaultName -IncludeAllVersions $IncludeAllKeyVersions))
$allKeyVaultObjects.AddRange((Get-AzureKeyVaultObjectSecrets -VaultName $VaultName -IncludeAllVersions $IncludeAllSecretVersions))

# Get objects that are expired or expiring within the alert window
$today = (Get-Date).Date
$expiredKeyVaultObjects = [System.Collections.ArrayList]@()
foreach($vaultObject in $allKeyVaultObjects){
    if($vaultObject.Expires -and $vaultObject.Expires.AddDays(-$AlertBeforeDays).Date -lt $today)
    {
        # add to expiry list
        $expiredKeyVaultObjects.Add($vaultObject) | Out-Null
        Write-Output "Expiring: $($vaultObject.Id)"
    }
}

# Pass $expiredKeyVaultObjects to an alerter
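
The script above only writes the expiring objects to the output stream. As a minimal sketch of an alerter, assuming you have an SMTP server available (the server and addresses below are placeholders), the built-in Send-MailMessage cmdlet can push the list out as an email:

if ($expiredKeyVaultObjects.Count -gt 0) {
    # Compose one line per expiring object
    $body = ($expiredKeyVaultObjects | ForEach-Object { "$($_.Id) expires on $($_.Expires)" }) -join "`n"
    Send-MailMessage `
        -SmtpServer 'smtp.example.com' `
        -From 'keyvault-alerts@example.com' `
        -To 'ops@example.com' `
        -Subject 'Key Vault objects nearing expiry' `
        -Body $body
}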

Scheduling Expiry Notification using Azure Runbook

You can either run this manually every time you want a list of objects that are expired or nearing expiry, or you can set up a scheduled task to run the script at a set frequency. Since you are already on Azure, you can let Azure Automation schedule the task for you. A Runbook in an Azure Automation account can be granted access to the key vault. The Automation account should have a 'Run as account' set up, and the service principal created for it can be used to assign access policies on the vault. Check out the Accessing Azure Key Vault From Azure Runbook post for step-by-step instructions on how to set up a runbook to access the key vault. You can then schedule the runbook to execute at fixed time intervals. Feel free to modify the script to send email notifications, push notifications or anything else that matches your needs.
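
Creating the schedule can itself be scripted with the AzureRM Automation cmdlets. A minimal sketch, assuming the resource group, Automation account and runbook names below are placeholders for your own:

# Create a daily schedule and attach the runbook to it
New-AzureRmAutomationSchedule `
    -ResourceGroupName 'MyResourceGroup' `
    -AutomationAccountName 'MyAutomationAccount' `
    -Name 'DailyKeyVaultExpiryCheck' `
    -StartTime (Get-Date).AddDays(1).Date `
    -DayInterval 1
Register-AzureRmAutomationScheduledRunbook `
    -ResourceGroupName 'MyResourceGroup' `
    -AutomationAccountName 'MyAutomationAccount' `
    -RunbookName 'KeyVaultExpiryNotifier' `
    -ScheduleName 'DailyKeyVaultExpiryCheck'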

Generally, it is good practice to rotate your keys and secrets once in a while. Use the expiry attribute to force yourself to keep your sensitive configuration fresh. It's likely that such a notification feature will eventually be built into Key Vault, but till then, hope this helps you keep track of keys and secrets that are nearing expiry and take the necessary action to renew them. The full script is available here.

Azure Automation is a new service in Azure that allows you to automate your Azure management tasks and to orchestrate actions across external systems from right within Azure. If you are new to Azure Automation, get started here. Runbooks live within the Azure Automation account and can execute PowerShell scripts. In this post, I will walk you through how to use Key Vault from an Azure Automation Runbook.

To create a Runbook, go to 'Add a Runbook' under the Automation account's Runbooks section, as shown in the image below. Once created, you can author your PowerShell script there.

Azure Automation Create a Runbook

In this example, I will get all the keys from an existing key vault using the Get-AzureKeyVaultKey cmdlet. This returns all the keys in the given key vault.

Get-AzureKeyVaultKey -VaultName YoutubeVault

If we run this script now, it will fail: we have neither imported the key vault cmdlets into the runbook nor given the Automation account access to the keys in the vault. Running it gives me the below error.

Get-AzureKeyVaultKey : The term ‘Get-AzureKeyVaultKey’ is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

Under 'Assets' in the Azure Automation account's Resources section, select 'Modules' (as shown in the image below) to add modules to the runbook. To execute key vault cmdlets in the runbook, we need to add AzureRM.Profile and AzureRM.KeyVault. Search for these under 'Browse Gallery' and import them.

Azure Runbook Add KeyVault Module

To give the Runbook access to the keys in the vault, it needs to be specified in the access policies of the key vault. The 'Run As Accounts' feature creates a new service principal in Azure Active Directory and assigns it the Contributor role at the subscription scope. The 'Application ID' from creating the Run As account is used to assign access policies on the key vault. In this example, I grant the 'list' and 'get' permissions to keys.

Azure Automation Runbook, set run as account
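
Assigning the access policy can also be done from PowerShell. A minimal sketch, assuming you substitute the Application ID of your Run As account for the placeholder below:

# Grant the Run As service principal 'get' and 'list' permissions on keys
Set-AzureRmKeyVaultAccessPolicy `
    -VaultName 'YoutubeVault' `
    -ServicePrincipalName '<application-id-of-run-as-account>' `
    -PermissionsToKeys get,list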

You can use the sample code below, taken from the AzureAutomationTutorialScript example runbook, to authenticate with the Run As account and manage Resource Manager resources from your runbooks. The AzureRunAsConnection is a connection asset automatically created when we created the Run As account above. It can be found under Assets -> Connections. After the authentication code, I run the same code as above to get all the keys from the vault.

$connectionName = "AzureRunAsConnection"
try
{
    # Get the connection "AzureRunAsConnection "
    $servicePrincipalConnection=Get-AutomationConnection -Name $connectionName

    "Logging in to Azure..."
    Add-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch {
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    } else{
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}

Get-AzureKeyVaultKey -VaultName YoutubeVault

On execution, the Runbook connects to the key vault and retrieves all the keys. Based on the permissions set on the vault, you can perform different actions against it. This helps automate a lot of tasks that otherwise need to be done manually. Hope this helps you connect Runbooks with Key Vault.

A digital signature is a mechanism to ensure the validity of a digital document or message. Digital signatures use asymmetric cryptography - a public and private key pair.

A valid digital signature gives a recipient reason to believe that the message was created by a known sender (authentication), that the sender cannot deny having sent the message (non-repudiation), and that the message was not altered in transit (integrity)

The below diagram shows an overview of the different steps involved in the digital signature process. We generate a hash of the data that we need to protect and encrypt the hash value using the private key. This signed hash is sent along with the original data. The receiver generates a hash of the received data and decrypts the attached signature using the public key. If both hashes are the same, the signature is valid and the document has not been tampered with.

Azure Key Vault - Verify Signature Offline

Azure Key Vault supports sign and verify operations and can be used to implement Digital Signatures. In this post, we will explore how to sign and verify a message using Key Vault. Verifying the hash locally is the recommended approach as per the documentation and we will explore how this can be achieved.

Verification of signed hashes is supported as a convenience operation for applications that may not have access to [public] key material; it is recommended that, for best application performance, verify operations are performed locally.

Signing Data

Sign and Verify operations on Key Vault are allowed only on hashed data, so the application calling these API methods should hash the data locally before invoking them. The algorithm property value passed to the Key Vault client API depends on the hashing algorithm used to hash the data. Below are the supported algorithms.

  • RS256: RSASSA-PKCS-v1_5 using SHA-256. The application supplied digest value must be computed using SHA-256 and must be 32 bytes in length.
  • RS384: RSASSA-PKCS-v1_5 using SHA-384. The application supplied digest value must be computed using SHA-384 and must be 48 bytes in length.
  • RS512: RSASSA-PKCS-v1_5 using SHA-512. The application supplied digest value must be computed using SHA-512 and must be 64 bytes in length.
  • RSNULL: See [RFC2437], a specialized use-case to enable certain TLS scenarios.

The below code sample uses the SHA-256 hashing algorithm to hash and sign the data.

// Hash the message locally, then sign the digest with the key in the vault
var textToSign = "This is a test message";
var byteData = Encoding.Unicode.GetBytes(textToSign);
var hasher = new SHA256CryptoServiceProvider();
var digest = hasher.ComputeHash(byteData);
var signedResult = await keyVaultClient
    .SignAsync(keyIdentifier, JsonWebKeySignatureAlgorithm.RS256, digest);

Verify Online

To verify a signature online, the keyVaultClient supports a Verify method. It takes the key identifier, algorithm, digest and signature to verify if the signature is valid for the given digest.

var isVerified = await keyVaultClient
    .VerifyAsync(keyIdentifier, JsonWebKeySignatureAlgorithm.RS256, digest, signedResult.Result);

Verify Offline

To verify offline, we need access to the public portion of the key used to sign the data. The client application that needs to verify signatures can connect to the vault and get the key details, or use a public key shared out of band. The AD application used to authenticate with the key vault should have Get access for retrieving the public key; this can be set using the PermissionToKeys switch when registering the AD application with the key vault. Assuming we have access to the public key as a JSON string, we can use the RSACryptoServiceProvider to verify the signature offline.

// Rebuild the public key from the shared JSON Web Key and verify the signature locally
var key = JsonConvert.DeserializeObject<JsonWebKey>(jsonWebKey);
var rsa = new RSACryptoServiceProvider();
var p = new RSAParameters() { Modulus = key.N, Exponent = key.E };
rsa.ImportParameters(p);
isVerified = rsa.VerifyHash(digest, "Sha256", signedResult.Result);

The signature verification succeeds if neither the message nor the signature was tampered with. If either was modified, validation fails.

You can get the sample code here. Hope this helps you to implement Digital Signatures using Key Vault.

The current state of the IT industry demands that one constantly be on the go, learning something new. New technologies, frameworks, languages etc. are released almost every other day, and it is easy to get overwhelmed by all this information. The best way to keep up is to ignore most of it. But every now and then you will want to learn something new, either because of project demands or out of personal interest. It is important to have a learning plan and to stick with it; if not, it's easy to get distracted. These are some of the ways I try to structure my learning.


Choosing and Sticking to a Topic

The most important thing in learning something new is deciding what to learn. This could be driven either by work needs or by personal interest. It's easier when driven by work needs, as you do not have much to choose from. But you still need to make sure you stay within the boundaries of what you are trying to learn. It's easy to get off track, as a lot of unfamiliar things will come up. At times, understanding things at a certain level of abstraction is important. Just as social networks distract you from real work, unfamiliar terminology can lead you off on a different path. So make sure you always stick to your end goal.

Books, Blogs, Videos

There are different ways to learn something new: books, videos, podcasts, blogs etc. There is no such thing as the best mode of learning; it is person specific. So don't just imitate what your friend does - find what works best for you. Whenever I am learning something new, books work best for me. Books give a structured approach to a new topic and ease your way through it. Though I have a Kindle, for technical books I prefer hard copies; I often need to flip back or forward a few pages, and that feels best with a physical book. I find blogs useful for going deeper into a topic I already have some idea about. For an overview of topics and interesting things, I find videos and podcasts useful. So depending on the need, I mix and match these different learning modes.

Current Reading List

Open Source Projects, Forums, Demo application

It is important that you try out whatever you learn. There are different ways to learn by doing, and it depends on the individual. When learning a new topic, I choose to build a sample application using the technology I am learning. I created Picfinity when I was learning Windows 8 Modern applications. Once comfortable with a topic, I answer questions on forums (mostly MSDN or Stack Overflow). Contributing to open source projects is another approach to getting some hands-on experience; it also helps the community and improves your self-confidence. GitHub is a good place to start with open source contribution.

Freelancing

You don't always get to implement what you learn at your work, and implementing things in real-world projects is an important aspect of learning. Freelancing is a good way to get work in areas that you want to improve or learn. Bid for projects that use technologies you are learning. It might be hard to get offered projects on technologies you have no prior experience with; showcasing sample applications you built while learning increases your chances of getting the job. I used Picfinity to show that I could build Windows 8 and phone applications, and this landed me opportunities to build different Windows Phone and Windows 8 applications.

Blog

Sharing your learnings helps improve your own understanding of the topic. It also helps others taking the same path to learn the new topic. Each one of us has a different perspective on understanding and learning things, so don't worry if you are writing about something that has already been written about. Make sure you blog your learnings, and if you are new to blogging, there is no better time to get started. A blog also acts as a good resume when applying for a job.

With new technologies and frameworks coming up every other day, it is hard to keep up with all of them. It is not necessary to understand everything that is out there; what is important is being able to learn quickly when the need arises. Also keep yourself aware of the changes in the industry and in the technologies you use. It does not matter if you do not know all the latest technologies; what matters is how fast you can learn a new concept. Have a learning plan! What do you find the most effective way to learn? Sound off in the comments.

Edit: Came across this interesting podcast on Hanselminutes where Scott talks to Daphne.

You can now manage Key Vault through the Azure portal. Prior to this, Key Vaults were managed using PowerShell, ARM templates or the REST API. Managing Key Vault is now easy and user-friendly for non-technical people. In this post, I will walk through the new features in the Azure portal to manage Key Vault.

If you find any difference between what you see in the portal and the screenshots below, it's likely that the portal has been updated.

In the new Azure portal, search for 'key vault' to access this new feature. Or you can go to 'More Services' and scroll down to the 'Security + Identity' section in the menu. Selecting Key Vault takes you to all the available Key Vaults under your subscription. You can further filter the vault list by vault name using the filter box, or based on your subscriptions.

Key Vault in Azure Portal

Creating Key Vault

To create a new Key Vault, select the 'Create' option. By entering the vault name, subscription, resource group and location, you can create a new Key Vault.

Create Key Vault in Azure Portal
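
The same can be scripted with the AzureRM cmdlets. A minimal sketch, assuming the resource group below already exists (all names are placeholders):

# Create a vault; use -Sku premium if you need HSM backed keys
New-AzureRmKeyVault `
    -VaultName 'MyVault' `
    -ResourceGroupName 'MyResourceGroup' `
    -Location 'Australia East' `
    -Sku standard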

The pricing tier defaults to Standard, which does not support HSM backed keys - you will only be able to create software keys. Select the Premium pricing tier if you need HSM backed keys. By default, the login with which you create the vault is granted access to it and added to the access policies. You can grant additional applications or users access and control their permissions. Below I am adding in an AD application.

Key Vault Add Access Policy

You can specify the Key Permissions and the Secret Permissions granted to the Service Principal.

Access Policy set Secret Permissions

Keys and Secrets

Once the vault is created, you can add keys and secrets to it by selecting the vault from the vaults list. From here you can manage different aspects of the vault, including its keys and secrets.

Access Policy set Secret Permissions
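
Adding objects can also be done from PowerShell. A minimal sketch with placeholder names:

# Create a software-protected key and a secret in the vault
Add-AzureKeyVaultKey -VaultName 'MyVault' -Name 'MyKey' -Destination 'Software'
$secretValue = ConvertTo-SecureString 'P@ssw0rd' -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName 'MyVault' -Name 'MySecret' -SecretValue $secretValue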

The Azure portal experience of Key Vault is good and covers most of the functionality needed when using a Key Vault. How do you find the new UI experience?

Consistency is important in blogging. It does not matter if you are bad at writing, have nothing to write or nobody reads your blog. What matters is whether you write on a predictable schedule - once a month, once a week, even once a year or once a day. Without a schedule, you are more likely to fall away from blogging. As I have written before, I always struggled to write regularly. But since the start of this year, I have been blogging to a schedule (once a week) and have been successful with it. This post looks at some of the challenges I faced and the things I have incorporated to stick to a schedule.

Make writing a habit - write daily

Make Writing a Habit

Writing posts was hard and I always procrastinated on it. It often took me more than a week to complete a post. There's a saying: If it hurts, do it more often. So to make writing easy, I had to write more often.

I write only when inspiration strikes. Fortunately it strikes every morning at nine o’clock sharp.

- Somerset Maugham

Generating Topics

Till a year back, I struggled to find topics to blog about. I did not find any 'cool thing' worth blogging about. Not much has changed even now on doing 'cool things'. The only thing that changed was that I decided to write every day - no matter what I do or don't do.

I have always struggled to find topics to blog about, but then I realized all I need is to pay attention to the things I do daily.

A year back I asked John Sonmez from SimpleProgrammer the same question.

email to John Sonmez (SimpleProgrammer) on finding a blogging schedule

Though I did not get a reply from John, the question kept me thinking. Late last year, I started a daily challenge of generating at least one blog topic a day for a month. I did not have to write anything - just find a topic and add it to a OneNote list. I did this for a month and found that I had enough topics to write on. Since the challenge did not involve any writing, I did not restrict my thoughts. This is when I realized that it was not the lack of topics that kept me from blogging, but the fear of writing. That's when I started the 'Write Everyday' daily challenge.

Write Everyday

This is the most important thing that helps me write posts regularly - write every day. Yes, every day, no matter what happens, I write at least one line towards a post. Since the goal is to write just one line for a blog post, I hardly ever miss it. Keeping the goal small and simple is a trick I picked up from the book Mini Habits and find useful. You can follow this for anything you want to make a habit.

Make it Visible

Seeing something progress to completion helps me get it done faster. I used to write elsewhere, then copy the text to my blog and publish it. This did not give me a sense of progression. I now have a draft post workflow integrated with my blogging platform; I can preview the blog along with the draft posts on my laptop. This gives a sense of progression and also makes the end goal visible.

Making your goal clear and visible helps you achieve it faster

I also use this to keep track of new post ideas. Whenever I find a topic to blog, I add it as a draft. I add bullet points to the post whenever I get more thoughts on that topic. Once there are enough points to write about, I work on that post.

Make Blogging All About Writing

The last thing I want is for the mundane, repetitive tasks of publishing a post to get in the way of writing. The more regular I became in writing, the more repetitive tasks I noticed in my blogging workflow; when you do any activity repeatedly, you notice the tasks that come up over and over again. I started modifying my blog engine for my needs and to reduce the manual effort.

Write in Advance

Having posts ready to be published is helpful, as it gives me the flexibility to slack a bit when needed. It's a great feeling to have blog posts ready to go (right now I have posts ready for a month). I modified my blogging engine to support publishing posts on future dates. This lets me set a publishing date in the future for a post and have it deployed automatically on that day. Automating publishing helped make blogging more about writing: I can forget about a post as soon as I complete it, specify the date I want it published, and rest assured that it will go out.

Make Writing Accessible

You never know when you will have free time and want to blog. Maybe it's while traveling, waiting in a queue or at your work desk; you might be offline or connected. Making it possible to write irrespective of the device (laptop, mobile, tablet) and connectivity was key for me. I have my draft posts synchronized through Dropbox and available on all my devices. This helps me use all the pockets of time I get for myself.

These are the things that have helped me pick a schedule and follow it. I have been successful in publishing one post per week for the past 10 months and want to keep it going. Now that I am able to publish regularly, I want to improve the speed at which I write posts. Though I have reduced the time from over a week to days, I still find it too long and want to bring it down to a day. The way images are added, optimized and served is something else I am currently trying to improve. Do you have a blog? If not, start one and stick to a schedule that works for you!

It's been six months with the Logitech MX Master mouse and I absolutely love it for daily use. I picked this mouse up a week after reading Hanselman's review. It feels just as it sounded in his review - perfect!

Logitech MX Master

Key Features

  • The mouse fits perfectly into the hand and supports the wrist in a natural position. It is suitable for long-duration use.

  • The battery is rechargeable and the mouse comes with a micro-USB charging cable. After a full charge, the mouse lasts me 2-3 weeks. It can be used while charging, so there is no downtime. LED lights indicate the charge status, and the software also warns when the battery is low.

  • I connect the mouse using the Logitech Unifying Receiver, but it also supports connecting over Bluetooth. This is useful when you want to connect it to multiple computers.

  • The mouse supports Easy-Switch technology. You can pair up to 3 devices and switch between them using the Easy-Switch button on the underside of the mouse. I have not used this feature as I mostly work on just one laptop.

  • Darkfield Laser Sensor tracks flawlessly over all surfaces. I use the MX Master across wooden and glass surfaces and find it smooth over both.

  • The scroll wheel supports smooth scrolling and a 'Ratchet' (click, click, click) mode, with automatic mode shifting. So when reading long documents you can be on smooth scrolling, and when navigating code or shorter texts you can use Ratchet mode for greater control. It also has a button to shift between the two modes manually.

The mouse has accompanying software that allows customization of the mouse buttons.

Logitech MX Master Software

The software allows setting application-specific actions for buttons. For example, I have the gesture button set to archive emails in Outlook, 'Navigate to Definition' in Visual Studio, close tab in Chrome etc. You can configure this for any action in applications of your choice (provided the application exposes a keyboard shortcut that the button can be mapped to).

Logitech MX Master Software customizations

Issue

The only issue I have faced so far is with the scroll mode. The 'Ratchet' mode no longer works for me: the mouse is always in smooth scrolling mode and the mechanical key to shift modes does not work anymore. Google tells me that this is a common issue with the mouse. This video walks through how to open up the mouse and fix the mechanical part that clicks in to switch the scroll modes. I have not tried this myself, as I don't find it much of a problem to be without Ratchet mode.

Other than the Ratchet mode issue, I have not found any other issues with the mouse.

Trying out the logitech mx master mouse. Liking it!

A photo posted by Rahul P Nath (@rahulpnath)

If you are looking for a new mouse and willing to spend a little more than usual, I recommend the Logitech MX Master. What mouse do you use?

* Amazon links are affiliated.

Last month I was in Trivandrum (Kerala, India) for a month's vacation. This was also my first trip back to India after moving to Sydney. While in India, I chose not to have a mobile connection and to be offline on the go (at home I had a broadband connection). So while traveling or moving around, I had my mobile in airplane mode. But even then there were times when I resorted to my mobile to pass time: traveling by bus or train, afternoons in hotels (when everyone rests) etc. gave me a lot of 'me time'.

Living an offline Life

Below are some of the things I have my phone geared up for in offline mode.

Blogging

Blogging is one of the things I enjoy doing most these days. To keep up with my schedule, I try to write every day. I have modified my blogging workflow to give me the flexibility to blog from any of my devices. I am writing this post on a train, 'disconnected'. When back home and connected, all the offline content synchronizes through Dropbox and becomes available for publishing from my laptop. Making writing accessible from everywhere helps me stick with my mini habits.

Reading

I always have books and articles downloaded and available for offline reading. For books, I use the Kindle application. Most of the time I have my Kindle with me; if not, I use the mobile application. For blog posts and articles I use Pocket. Whenever I am connected and find interesting posts to read later, I add them to Pocket. Pocket's mobile application downloads articles and makes them available for offline reading.

Amazon Kindle and Pocket for offline reading

Game

I usually do not have many games on my mobile. But in case I feel bored, I play the game built into the Chrome browser (available only in offline mode). When offline, a T-Rex dinosaur comes up in the browser with the message 'Unable to connect to the Internet'. Tapping on the dinosaur starts the game.

Google Chrome offline T Rex game

To-Do List

Offline time is good for managing your to-do lists and doing a brain dump. I use Todoist to manage my tasks and activities. The Todoist mobile application stores all tasks offline; it allows adding tasks when disconnected and synchronizes them to the server when connected. I also use OneNote for capturing notes, which also has an offline mode.

Trip Planner

For short local trips, the recently launched Google Trips is useful. Google Now already shows trip summaries and alerts on the home page. Google Trips takes this to the next level and allows you to manage all trip data in one place. It collates all your travel bookings and makes them available in offline mode.

Google Trips for managing trip data offline

These are the common things I have my mobile set up for in offline mode. I had this setup while in Sydney too, as I try to stay disconnected when commuting to work. Staying off the internet helps me get more things done!

Code formatting is an important aspect of writing code. Followed well, it helps keep code readable and easy to work with. Here are some of the different aspects of formatting code and my personal preferences. I then explore options to enforce code formatting and ways to introduce it into an existing code base.

Below are some of the popular formatting rules, ones that deliver high value when enforced in a project.

Tabs vs Spaces

One of the most debated topics in code formatting is whether to use tabs or spaces to indent code. I never knew such a debate existed until my most recent project, which had developers from different parts of the world with different preferences. I came across the below excerpt from Jeff Atwood, with which I completely agree.

Choose tabs, choose spaces, choose whatever layout conventions make sense to you and your team. It doesn’t actually matter which coding styles you pick. What does matter is that you, and everyone else on your team, sticks with those conventions and uses them consistently.

That said, only a moron would use tabs to format their code.

- Jeff Atwood

Settings for these are often available at the IDE level. In Visual Studio, this is available under Options, Text Editor, All Languages, Tabs. Be aware of what you choose and make sure you have the same settings across your team.

Horizontal Alignment

Avoid aligning code by common separators (=, ;, ,) across adjacent lines. This kind of alignment falls out of order when you rename variables or properties.

Not Refactoring friendly and needs extra effort to keep it formatted
var person = new Person()
{
    FirstName = "Rahul",
    LastName  = "Nath",
    Site      = "www.rahulpnath.com"
};
Refactoring friendly
var person = new Person()
{
    FirstName = "Rahul",
    LastName = "Nath",
    Site = "www.rahulpnath.com"
};

Horizontal Formatting

You should never have to scroll to the right - I picked up this recommendation from the book Clean Code (a recommended read). It is also recommended that a function fit on the screen without needing to scroll up or down. This encourages keeping functions short and specific.

We should strive to keep our lines short. The old Hollerith limit of 80 is a bit arbitrary, and I’m not opposed to lines edging out to 100 or even 120. But beyond that is probably just careless

- Uncle Bob

The Productivity Power Tools extension for Visual Studio allows adding a Column Guide, which reminds developers that their full line of code or comments may not fit on a single screen.

Code Formatting Maximum Width Column Guide in Visual Studio using Power Tools

Aligning Function Parameters

Always try to keep the number of parameters as few as possible. In cases where there are more parameters or longer function names, the team must choose a style for splitting parameters onto new lines.

Allowing parameters to take the natural flow of the IDE (Visual Studio) is the simplest approach, but it often leads to poor readability and code clutter.

Function Parameters taking natural flow of IDE

Breaking parameters into separate lines is important for readability; use the column guide to decide when to do it. One approach keeps the first parameter on the same line as the function and aligns all other parameters under it on new lines. This works well when viewed in the same font and resolution used when writing, but falls out of place when either changes.

Function Parameters on new line aligned with first parameter

A better variant of the above style is to have the parameters on new lines aligned to the left, which ensures they stay in place when fonts or resolutions change. The style I prefer is to have every parameter on its own line, as shown below; it works well across different font sizes and resolutions.

public int ThisIsALongFunctionNameWithLotsOfParameters(
    int parameter1,
    string parameter2,
    int parameter3,
    string optionalParameter = "Test")
{
}

Visibility Based Ordering

It is good practice to maintain a specific order of items within a class: properties first, then constructors, public methods, protected methods, private methods etc. The exact order is up to the team, but sticking to it makes the code more readable.

Code Analysis Tools

Checking for styling and formatting issues in code reviews is a boring task. It's best to automate style checks at build time (local and server builds). Making the build throw errors for styling issues forces developers to fix them. Once developers get used to the rules, writing code without formatting issues becomes second nature. StyleCop is an open source static code analysis tool from Microsoft that checks C# code for conformance to StyleCop's recommended coding styles and a subset of Microsoft's .NET Framework Design Guidelines. It has a Visual Studio plugin and also integrates well with MSBuild.

Cleaning up a Large Code Base

Introducing StyleCop (or any code format enforcement) into a large pre-existing code base is challenging. Turning the tool on immediately throws hundreds or thousands of errors, and trying to fix them all in one stretch might impact ongoing development. This often causes teams to delay introducing such enforcement into the project, and it continues to be a technical debt.

Taking an incremental approach - fixing files one by one as and when they are changed - seems a good idea. Teams can adopt the Boy Scout Rule: 'Leave the file cleaner than you found it'. Every time a file is touched for a fix, run the StyleCop analysis and fix the errors. Over a period of time, this will make the project clean. The only problem with this approach is that developers tend to ignore or forget to run the analysis.

Trivial things like code formatting are hard to mandate within a team unless enforced through tooling

Source Control Hooks

We can plug into the various hooks that source control systems provide to enforce code formatting on developer machines. In git, you can add a custom pre-commit hook to run the StyleCop analysis on all the staged files. StyleCopCLI is an open source application that wraps the StyleCop DLLs and allows running the analysis from the command line. In the hook below, I use this CLI to run the StyleCop analysis on all staged files.

#!/bin/sh
echo "Running Code Analysis"
# Run StyleCop (via StyleCopCLI) against all staged files
./stylecopcli/StyleCopCLI.exe -cs $(git diff --cached --name-only)
# StyleCopCLI exits with code 2 when there are violations
if [ $? = 2 ]
    then
        echo Commit Failed! Fix StyleCop Errors
        exit 1
    else
        echo No StyleCop Errors!
        exit 0
fi

If there are any StyleCop validation errors, the commit is aborted, forcing the developer to fix them. The git hooks work fine when committing from the command line or from UI tools like SourceTree. However, the Visual Studio git plugin does not run git hooks, so the check is skipped there.

StyleCop git hook failing commit in console

StyleCop git hook failing commit in Source Tree

Over a period of time, most of the files will get cleaned up, and the remainder can be done all at once with less effort. Once the entire code base passes all StyleCop rules, the check can be enforced on the build server. This ensures that no more badly formatted code gets checked into source control.

Code is read more than it is written, so it is important to keep it readable and well-formatted. It also makes navigating code bases easier and faster. These are minor things, often overlooked by developers, but they have a high impact on productivity when followed. Do you enforce code formatting rules in your current project? What are the rules that you find important? Sound off in the comments below!

When creating a subscription for a client, the calculated number of months was at times off by one - a bug reported in the production application I was working on. Though not a blocker, it was creating enough issues for end users that it required a hotfix. One of my friends picked the issue up and started working on it. A while later, while checking on the status of the bug, I noticed him playing around with LINQPad, testing a method that calculates the number of months between two dates with different values.

Testing

We often test our code elsewhere because it's coupled with other code, making it difficult to test at the source itself. The fact that we need to test an isolated part of a larger piece of code is a 'code smell': there is probably a class or method waiting to be extracted and unit tested separately.

Having to test code elsewhere other than the source is a Smell. Look for a method or class waiting to be extracted

In this specific case, below is how the code that calculates the month difference between two dates looked. As you can see, the code is coupled with the newAccount object, which in turn is coupled with a few other entities that I have omitted. On top of that, this method lived in an MVC controller, which had other dependencies.

Existing Code
...
var date1 = newAccount.StartDate;
var date2 = newAccount.EndDate;
int monthsApart = Math.Abs(12 * (date1.Year - date2.Year) + date1.Month - date2.Month) - 1;
decimal daysInMonth1 = DateTime.DaysInMonth(date1.Year, date1.Month);
decimal daysInMonth2 = DateTime.DaysInMonth(date2.Year, date2.Month);
decimal dayPercentage = ((daysInMonth1 - date1.Day) / daysInMonth1)
                      + (date2.Day / daysInMonth2);
var months = (int)Math.Ceiling(monthsApart + dayPercentage);
...

This explains why it was easier to copy the code across and test it in LINQPad: it was difficult to construct the whole hierarchy of objects needed to test it in place. So the easiest way to fix the bug is to test the logic elsewhere and then fit it back in its original place.

Extract Method Refactoring

This is one of the scenarios where the Extract Method refactoring fits best. According to the definition:

You have a code fragment that can be grouped together. Turn the fragment into a method whose name explains the purpose of the method.

Extract Method refactoring is also referred to in Working Effectively With Legacy Code and xUnit Test Patterns (to refactor test code). It helps separate logic from the rest of the object hierarchy and test it individually. In this scenario, we can extract the logic that calculates the number of months between two dates into a separate method.

To test-drive the extracted method, all I do initially is extract it. As the method depends purely on the parameters passed in and not on any instance state, I mark it as a static method. This removes the dependency on the MVC controller class and the need to construct its dependencies in the tests. The test cases include the failed 'off by one' case (("25-Aug-2017", "25-Feb-2018", 6)). With tests that pass and fail, it's now safe to make changes to the extracted method to fix the failing cases.

Tests
[Theory]
[InlineData("10-Feb-2016", "10-Mar-2016", 1)]
[InlineData("10-Feb-2016", "11-Mar-2016", 2)]
[InlineData("10-Feb-2015", "11-Mar-2016", 14)]
[InlineData("01-Feb-2015", "01-Mar-2015", 1)]
[InlineData("21-Sep-2016", "22-Sep-2016", 1)]
[InlineData("25-Aug-2017", "25-Feb-2018", 6)]
[InlineData("12-Aug-2016", "15-Mar-2019", 32)]
public void MonthsToReturnsExpectedMonths(
    string date1,
    string date2,
    int expected)
{
    var actual = SubscriptionController.MonthsTo(DateTime.Parse(date1), DateTime.Parse(date2));
    Assert.Equal(expected, actual);
}

More than the algorithm used to solve the original issue, what matters is identifying such scenarios and extracting them as methods. Make the least possible change to make the code testable, then fix it step by step.

Whenever a code fragment depends only on a subset of the properties of your class or of a function's inputs, it can be extracted into a separate method.

Extracted method after Refactoring.
public static int MonthsTo(DateTime date1, DateTime date2)
{
    int months = Math.Abs(12*(date1.Year - date2.Year) + date1.Month - date2.Month);
    if (date2.Date.Day > date1.Date.Day)
        months = months + 1;

    return months;
}

Introduce Value Object

Now that we have fixed the bug and have tests covering the different combinations, let's see if this method can live elsewhere and be made reusable. The start date and end date on an account always go together and form a domain concept that can be extracted as an 'Account Term Range', represented as a DateRange Value Object. We can then give the DateRange Value Object a method that returns the number of months in the range. This makes the function reusable and the code more readable. I made the original refactored method an extension method on DateTime and used it from the DateRange Value Object.

Encapsulate into Value Object
public static class DateTimeExtensions
{
    public static int MonthsTo(this DateTime date1, DateTime date2)
    {
        int months = Math.Abs(12*(date1.Year - date2.Year) + date1.Month - date2.Month);
        if (date2.Date.Day > date1.Date.Day)
            months = months + 1;

        return months;
    }
}

public class DateRange
{
    public DateTime StartDate { get; private set; }
    public DateTime EndDate { get; private set; }

    public DateRange(DateTime startDate, DateTime endDate)
    {
        // Ignoring null checks
        if (endDate < startDate)
            throw new ArgumentException("End Date cannot be less than Start Date");

        this.StartDate = startDate;
        this.EndDate = endDate;
    }

    public int GetMonths()
    {
        return StartDate.MonthsTo(EndDate);
    }
}
... // Rest of Value Object Code to override Equals and GetHashCode

If you are new to TDD or just getting started with tests, introducing tests while fixing bugs is a good place to start. It might also help make the code more decoupled and readable. Try covering a fix with tests the next time you fix a bug!
