I prefer to use the keyboard to navigate within sites I use frequently. The keyboard lets me move around a site faster and perform tasks quicker. The Logitech MX Master mouse provides a lot of navigation capabilities, but I find the keyboard faster for repetitive actions - like posting a new status update on social media sites, managing tasks in Jira, TFS online, GitHub, responding to emails, Todoist, etc. Finding keyboard shortcuts on these various sites can itself be a daunting task. One might need to google for the shortcuts or find the relevant documentation on the site. Luckily, most of the popular sites today display a pop-up modal with all the keyboard shortcuts for the site when pressing Shift + ?. Note that this might not work on all the sites out there, but for most of the common ones that I use, I find it working.
So the next time you spend a lot of time on a website try hitting Shift + ? to look for supported keyboard shortcuts.
If you are the kind of person that likes to listen to some music while working, then this one is for you. Music is proven to have significant effects on improving focus while working. But it is also quite possible to get fully immersed in the music and forget all about work. Different kinds of music improve concentration without pulling you into the music itself. I usually wear my headphones and listen to music while working.
Music helps to minimize distractions and helps you reach a state of Flow, which is ideal for improving productivity.
At times I like to listen to just some background noise simulating different environments. Noisli is an application that helps mix different sounds and create your perfect environment. The sounds could be of working in a restaurant, rain with thunder, a forest, sitting by the side of a fire, etc. It helps create the mood that you want and recreate it with sound. Since the noises follow a fixed pattern, you soon get used to them, which improves concentration at work.
At one of my clients, we were facing an issue of missing part of the form data when processing a Submit request on a form. As per the current design, the form autosaves data to the database as the user types it in. When the user finally submits the form to be processed, the Controller gets all the relevant data from the database and sends it for processing. But we noticed that parts of the data were missing from the request sent for processing, even though the database had those values. This was a clear case where the form's Submit request got processed even before all the form data was saved. The UI was enabling the Submit button right after all the UI validations were made, while asynchronously firing off saves to the database.
Let’s not go into the design discussion of whether the UI should be sending in all the data to be processed as part of the Submit request, as opposed to just sending a reference id and having the controller get all the data from the database (which is what it currently does). The quick fix for this problem was to enable the Submit button only after all the asynchronous save requests (the ones for autosave) came back with a success response. The fix was simple, but testing it was a challenge.
We wanted to delay a few HTTP requests to check how the UI behaved.
When using automated tests there are a lot of frameworks that can help delay requests. But in this case, we were relying on manual tests.
Using Fiddler to Delay Requests
Fiddler is an HTTP debugging proxy server application that captures HTTP and HTTPS traffic and displays it to the user. It is one of the tools that I use almost every other day. In Fiddler, we can create rules on web requests and modify how they are handled and responded to. Most of this functionality is available under the AutoResponder tab. We had seen earlier how to compose web requests and also simulate error conditions in Fiddler. Here we will see how to use Fiddler to delay request/response times. In Fiddler, we can either delay the request itself being sent to the server, or delay the handover of the response back to the calling application once it is received from the server.
By setting a delay on a request, we can specify the time to delay sending the request to the server. The value is specified in milliseconds. When a request matches the condition set (in this case an EXACT match with a URL), Fiddler delays sending it to the server by the set amount of time.
Delay sending a request to the server by #### milliseconds
Drag and drop the request URL (1) into the AutoResponder tab (2), and from the dropdown (3) under the Rule Editor choose the delay option and set the delay time. Click Save (4). Make sure that the request and rules are enabled (5 & 6).
By setting latency on a request, we can specify a delay before the response is passed back. When a request matches the condition set, Fiddler sends the request to the server immediately. Once the response is received, it delays handing the response back to the calling application by the set delay time in milliseconds.
Induce a delay (latency) before a response is returned.
Drag and drop the request URL (1) into the AutoResponder tab (2). Right-click on the URL and select ‘Set Latency’ (3). Enter the latency time in milliseconds and click OK. Make sure that the rules and latency options are enabled (4 & 5).
Using these options we delayed all the autosave requests going off the form. This delayed saving the data in the database, and the form's Submit request, once processed, did not have all the required data - reproducing the original bug. It also helped us test after the fix and ensure that the Submit button was enabled only after all form data was saved. In both the above examples, I chose the EXACT match condition to set the delay/latency. This delays only the specific request. To modify all requests, you can use a different regex match condition. To simulate a random delay or latency across different requests, you can even use Fiddler Scripting and set the delay time using a random number. This helps simulate a slow internet connection and test how the application responds to it.
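For the scripted approach, a rough sketch of such a rule in Fiddler's CustomRules.js (Rules > Customize Rules...) is below. The URL fragment and the delay range here are assumptions for illustration; check the Fiddler documentation for the exact throttling semantics of the trickle-delay session flags.

```
// Sketch only - goes inside Fiddler's CustomRules.js (FiddlerScript).
// The "/api/autosave" fragment and delay range are illustrative assumptions.
static function OnBeforeRequest(oSession: Session) {
    if (oSession.uriContains("/api/autosave")) {
        // Pick a random delay between 100 and 600 milliseconds
        var delay = 100 + Math.round(Math.random() * 500);
        // The request-trickle-delay session flag throttles sending the request
        oSession["request-trickle-delay"] = delay.toString();
    }
}
```

The `response-trickle-delay` flag can be set similarly in OnBeforeResponse to throttle the response side.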
I share a lot of information by copying and pasting from websites. Most of the time I want to share just the text and not the formatting. To remove any text formatting, I used to copy it into Notepad and then onto the destination. Yesterday I found that using Ctrl + Shift + V to paste (instead of Ctrl + V) removes all text formatting.
When using Ctrl + V, the text gets pasted along with its formatting, as shown below.
When using Ctrl + Shift + V, only the text gets pasted and the formatting is ignored.
Using this shortcut saves time as I no longer need to open Notepad for this!
PS: As one of my readers rightly pointed out (below in the comments), this might not work in all applications. I have tried this in the Chrome browser and Lync For Business, and it worked fine.
Code review is an essential practice in the development life cycle. Over a longer period of time, it helps improve code quality, unify team practices, share knowledge, mentor team members, etc. It helps find mistakes that are overlooked while developing and improves the overall quality of the software. This also accelerates the deployment process, as changes are more likely to pass through testing.
Peer review—an activity in which people other than the author of a software deliverable examine it for defects and improvement opportunities—is one of the most powerful software quality tools available. Peer review methods include inspections, walkthroughs, peer desk checks, and other similar activities.
Below are some of my thoughts on the various aspects involved in a Code Review.
Sending a Review
Before sending a code review, make sure that only the files necessary for the change are included in the review. It often happens that when we write code there are remains of things that we tried and discarded, like new files, packages, changes to project metadata files, etc. Double-check and make sure that the changes are just what is required. Ensure that the code builds successfully. If there are any build scripts that your team uses, make sure that those run and pass successfully. When submitting a code review, make sure that you reference the associated work item - be it a bug, story, task, etc. Add tests. Add a description detailing the change and any reasoning behind it to give more context. This will help the reviewer understand the code much faster. Add the relevant people to the review and submit the request. Check out some great tips for a better-looking review request.
Handling Review Comments
One of the key things about review comments, one that is often missed and leaves people frustrated, is trying to take them all in.
Not all comments in a review need to be addressed
If a review comment points out a mistake in logic or business functionality, or a conflict with other code, you need to fix it, unless you think the reviewer is wrong. But suggestions on how to better structure your code, refactor it into more readable code, naming, style, formatting, etc. need addressing only if you feel they add value. Either way, make sure to communicate well with the reviewer and reach an agreement.
Look at comments as a way to improve your code and help the team and the business. Go in with a positive attitude. When seen as an overhead or an extra ritual, code reviews can be really painful and depressing. Make a note of commonly occurring comments or mistakes you are making and try to handle them at the time of development. Rather than mechanically going through the code review and making changes to the code, internalize the change and try to see the benefits of it. This helps to incorporate such suggestions in future reviews as well.
Responding to a Code Review
I usually find myself following one of the below three variations when replying to a code review request.
Comment and Wait: I leave comments on the review but do not approve. This means that I would like to have those comments actioned and a new pull request raised for that. This often applies to cases where there are logic or business issues.
Comment and Approve: I leave comments (if any) but also approve the code review. This means that the code Looks Good To Me (LGTM), but would be better with the comments addressed. These comments generally relate to better formatting, improved naming, or refactoring for readability.
Add Relevant People: I add reviewers that I feel were missed and are relevant for the part of the code that is changed. I do this irrespective of the above two options if I feel someone else needs to take a look. If it was my own review that an extra reviewer got added to, I would wait to get a sign-off from that person too.
When reviewing code, look first at the functionality that the code change addresses. It is possible to get carried away by the technical aspects of the code and ignore the business aspect altogether. If you have Acceptance Criteria defined for tasks, then it’s worth reading them before doing the code review to get more context.
Once the business aspect is covered, have a look at the technical aspects of the change: whether the code is decoupled, has the correct abstractions, follows team conventions (best if automated). Check for commonly occurring problems like improper usage of the dispose pattern, magic numbers, large methods, unhandled code paths, etc. See if the new code fits into the overall architecture of the application. Look for tests and ensure the validity of the test data. Look out for overengineering or Not Invented Here syndrome.
Code formatting is as important as the code itself. Code is read more often than it is written, so we should try to optimize code for reading. I prefer to automate this as far as possible so that people don’t need to look for these issues in reviews. That time is often not well spent and tends to lead to longer discussions (tabs vs. spaces). When formatting is part of the build and automated, people seldom complain about it, and in a very short period of time the formatting rules become second nature to them. If you currently do not have automated checks, you can gradually introduce formatting checks into your builds, even for a large code base.
Don’t go by ‘It’s done like that everywhere so I will keep it the same’
There might be a lot of practices that have been followed over a period of time. But if you find any of these practices making the day-to-day functioning of the team harder, take a step towards changing them. I am not a fan of the ‘clean it all at once’ style of approach. I prefer to gradually introduce the change, for two reasons:
- No need to stop or allocate people to repeatedly do the same task of cleaning it everywhere. (Unless there is a very strong business justification to it)
- You get gradually introduced to the new way of doing things. This gives time to reflect and compare with the old way. You have time to correct yourself if the new approach is not fitting well either, or is causing more trouble than the previous one.
Foster environments where you don’t curb discussions or other people’s ideas, but encourage everyone to actively participate and throw around even the stupidest of ideas.
Psychological safety is a “shared belief, held by members of a team, that the group is a safe place for taking risks.” It is “a sense of confidence that the team will not embarrass, reject, or punish someone for speaking up,” Edmondson wrote in a 1999 paper. “It describes a team climate characterized by interpersonal trust and mutual respect in which people are comfortable being themselves.”
Code reviews should also be seen as a way to incorporate better practices from fellow developers and as a learning mechanism. Don’t take comments personally; look at them for what they are. When you have a conflicting opinion, you can reply to the comment with your thoughts and cross-check with the reviewer. Rarely, you may have conflicting opinions on code review comments that you are not able to resolve among the people involved. Walk up to the person (if you are co-located) or have a conversation over your team's messaging application. But make sure that it stays healthy. In case the discussion is not going the intended way, you can involve senior team members or other fellow team members to seek their opinions too. If such conflicts happen often, the team needs to analyze the nature of the review comments they occur on - whether it's between specific groups of people, or there are other visible patterns - and try to address them.
When taken in isolation, any practice that a team follows takes time. So don't disregard an activity just because it adds more time to your process. When seen as part of the overall development cycle and the benefits it brings to the business, code review proves to be an essential practice. Different teams tend to have different guidelines and checklists for reviews. Follow what works best for your team. Do you do code reviews as part of your development cycle? What do you feel is important in a code review? Sound off in the comments!
Over the past couple of years, I have read a lot of self-help books. Here are a few that I liked and have drawn ideas from. Self-help books are in themselves an easy way to procrastinate, as you get an immediate high from knowing how optimized and productive your life can be. I have fallen for this a lot of times, and it's hard to keep away from it. Every time I read one of these books, I get the feeling that this is going to change my life. But then, when I get to the end of the book, all I want is more of it, and I end up starting a new book. Slowly, with time, I started to realize that all that was happening was reading and very little action.
To be successful with any of these books, you need to draw the ideas into your daily life and practice them. You need to set short-term goals and build the behavior you are trying to create into yourself.
Reading without action is just another way to procrastinate and is a waste of time.
Take notes from the books, see how they can be incorporated into your daily life, and improve one thing at a time. Revisit the notes often. I prefer reading this genre of books on the Kindle (for technical books I prefer a physical copy), as it requires little flipping back and forth. On the Kindle, you can add or remove bookmarks, highlights, and notes at any location and revisit them at a later point in time.
Hope some of these help you as well. Have a great year ahead!
2016 was a great year, and I thought of sharing some of the things that went well, those that didn’t, and my goals for 2017.
2016 was a great year and is the first one where I am writing a ‘year in review’ post. Blogging, Videos, Open Source and Community contribution are some of the things that went well. FSharp, Reading, Travel, Photography and Exercise did not go that well. I am looking forward to 2017 and planning to keep the goodness of 2016 and add some more to it.
What went well
This has been a great year for my blog. On average, I published four blog posts a month. It started with a self-challenge in March to write every day. I was able to come up with eight posts that month, but then felt it was not something that I could stick with consistently. So I set a target of four posts per month and stuck to it for the rest of the year. I automated a lot of mundane tasks in my blogging workflow, right from creating draft posts to deploying posts and scheduling posts for future deployment. This has saved a lot of time for me and helps me stick to just the writing part of blogging!
Sticking with publishing posts at a regular interval was mostly about deciding that I have to write every day. I set a ‘mini habit’ to write every day - at least one line - and stuck to it. This helped me get over the initial inertia of starting a post. Having set this goal for myself, I had to consistently come up with topics to blog about. This very much changed the way that I approached my day job. I always approached it with the need to find something to share, in a way that I could abstract it out from the business dependencies. Most of the decisions and issues that happened to us (the development team) are now documented here. This acts as documentation for any new joiners and makes the ramp-up to the project a bit easier. More than anything, it definitely helps me find the solution when I come across the same issue again. So if you still don’t have a blog of your own, there is no better time than now. Make it a new year resolution. Get a URL and start writing. I have also been successful in getting a couple of people (at least four that I personally know of) to blog. Getting started is the biggest hurdle; the rest will fall into place in due course.
Starting a YouTube channel had been something I wanted to do for a long time. I have just got started with it and posted one video. I now understand why people say recording is hard - there is a lot to it, and it takes time to be good at it. The only way to get better at something hard is to do more of it - Frequency Reduces Difficulty. I plan to start with one video per month and see how it goes. I had been procrastinating on the second video for a while and wanted to see if I could get it out before the end of the year - got it out just in time!
Open Source and Forums
Contributing to Open Source projects is a good way to learn. I have always struggled to find projects/issues to contribute to. But then I learned that it is, again, just a matter of deciding and committing to it. First Timers Only is a good way to find issues and projects that one can possibly jump right into. I decided to start looking at projects that I use on a day-to-day basis - AsmSpy and AutoFixture were the ones that interested me. There were a couple of open issues in AsmSpy that I picked up, and Mike Hadlow was more than happy to merge them in. I also decided to set up a Chocolatey package for AsmSpy. On Mike’s request, I now manage the Chocolatey package account and am a contributor on the AsmSpy project. I automated the deployment pipeline for AsmSpy so that I do not have to worry about deploying the Chocolatey package every time a change is made. I also got to contribute to a few issues in AutoFixture, which is managed by Mark Seemann (ploeh). I was lucky enough to meet him in person at NDC Sydney.
I keep an eye on the Azure Key Vault MSDN forum and try to help every time a question comes up. Answering questions on forums is also a good way to learn, find interesting problems, and at times it is rewarding.
What didn’t go well
FSharp: Learning FSharp is something that I really want to do, but it’s not been going that well. I am on and off with this, and it keeps getting sidelined.
Travel & Photography: There’s not been much travel except for the long vacation back home and a few local places in Sydney. Though I have been clicking along, I was too lazy to process the photos. They are still lying on my camera, waiting to be processed.
Exercise: Getting enough exercise is something that I really lacked last year, and I think I have also put on some weight because of it. Except for the walk up and down from home to the station for the office commute, there’s not been much of my body moving.
Goals for 2017
In preparing for battle I have always found that plans are useless, but planning is indispensable.
― Dwight D. Eisenhower
- Blogging: Stick to at least four posts per month. Try to see if I can get up to 6-8. I will have to improve the time that I take to write a post for this. Handling images for the blog needs to be automated, as I spend some time converting and optimizing them.
- Videos: Create a schedule for publishing videos and improve the quality and delivery of the videos.
- Reading: Set up a reading plan and read at least 21 books (1 more than what I did in 2015).
- FSharp: Learn, Contribute and Blog
- Travel & Photography: At least one trip every 3 months, and post the photos.
- Exercise: Run/Bike at least once a week.
How often have you gone into a class to see the implementation while consuming the class or an interface? I do this almost every other day, and it’s mostly to check how the code handles boundary conditions: What does it do when there is no value to return? Does it need all the parameters? Reading code is hard and time-consuming, even if it’s code that you yourself wrote a few minutes back. Imagine every developer having to go into the implementation details anytime they consume a class. Bertrand Meyer, in connection with his design of the Eiffel programming language, coined the term Design By Contract, an approach for designing software. The central idea of Design By Contract is to improve the contracts shared between different components in the code base. In this post, we will see how we can improve our C# code and avoid unnecessary guard statements across our code base.
These days in programming we tend to abstract a lot more than what we really need. Dependency Injection and the use of IoC containers have led us to think that everything needs to be an interface. But essentially this is not the case. The bigger problem lies not in the abstraction, but in depending on the implementation details after abstracting. A leaky abstraction is an abstraction that exposes details and limitations of its underlying implementation to its users that should ideally be hidden away.
Consuming abstractions assuming a certain implementation is bad practice
Recently I came across the below code during a code review. Even though an empty string was not a valid configuration value, it was not being checked here, as the repository implementation returns null when there is no entry.
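The original snippet is not reproduced here, but it looked roughly like the sketch below; the repository interface and setting name are placeholders made up for illustration.

```
// Reconstructed sketch; repository and setting names are illustrative.
public interface IConfigurationRepository
{
    // Weak contract: the string may be null, empty, or a valid value.
    string GetConfigurationValue(string key);
}

public class PaymentSetup
{
    private readonly IConfigurationRepository repository;

    public PaymentSetup(IConfigurationRepository repository)
    {
        this.repository = repository;
    }

    public void Configure()
    {
        var config = repository.GetConfigurationValue("PaymentGatewayUrl");
        // Only null is checked, because the current repository implementation
        // happens to return null when no entry exists - an empty string slips through.
        if (config != null)
        {
            // ... use config ...
        }
    }
}
```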
This is a common practice, and I have myself fallen for it a lot of times. The fact that the repository returns only a null value is an implementation detail and is not clear from the contract that it exposes. Anyone could change the repository to start returning an empty string, which would then break this code. When taken in isolation, the code that uses ‘config’ must check for both null and empty to avoid invalid values. The abstraction contracts (function signatures) should convey whether a value is always returned and whether it can be empty or null. This helps remove unnecessary guarding code, or makes guarding mandatory, across the code base, and also indicates a clear intent.
The Robustness Principle is a general design guideline for software
Be conservative in what you do, be liberal in what you accept from others (often reworded as “Be conservative in what you send, be liberal in what you accept”).
Applying this principle in this context, we must be conservative in what we return from our contracts (be they on a class or an interface). The contract should be as explicit as possible about the nature of the values it returns.
Stronger Return Types
A repository returning a string is a weak contract, as it does not clearly express the nature of the value it returns. It can return any of three values: null, an empty string, or a valid configuration string. In our application, assuming that null and an empty string are both invalid, we should have a single representation for this state in the application. C# by its very design encourages this pattern, as it embraces the concept of nulls - the billion-dollar mistake. But this does not mean we are restricted by it. We can bring in concepts from other languages to help us solve this problem. In F#, for example, the Option type represents the presence or absence of a value. This is similar to the Nullable type in C#, but not restricted to value types. The Option type is defined as a union type with two cases: Some and None. Whenever consuming an option type, the compiler forces us to handle both cases.
In pure F#, nulls cannot exist accidentally. A string or object must always be assigned to something at creation, and is immutable thereafter
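The F# snippet is not reproduced here; a minimal sketch of the Option type in use looks like the following, where the settings dictionary and key are made-up examples.

```
// Illustrative sketch: absence of a value is explicit with Option.
let settings = dict [ "PaymentGatewayUrl", "https://example.com/pay" ]

let getConfigurationValue (key: string) : string option =
    match settings.TryGetValue key with
    | true, value when value <> "" -> Some value
    | _ -> None

// The compiler forces the consumer to handle both cases:
match getConfigurationValue "PaymentGatewayUrl" with
| Some url -> printfn "Using %s" url
| None -> printfn "No valid configuration found"
```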
Though C# does not have anything out of the box to define optional values, we can define one of our own. The Maybe class is one such implementation of the optional concept. The name is influenced by the option type in Haskell, Maybe. There are also other implementations of Maybe, but the concept remains the same - we can represent an optional type in C#. Code contracts are stronger using Maybe as a return type. If a function always returns a value, say a string, the function contract should remain a string. If a function cannot always return a value and can return null/empty (assuming that these are invalid values), then it should return a Maybe<string>.
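A minimal sketch of such a Maybe type could look like the following; the exact shape of the original implementation may differ, and the Do extension method here follows the usage described in the text.

```
using System;

// A minimal Maybe sketch; fuller open source implementations exist.
public class Maybe<T>
{
    public bool HasValue { get; }
    public T Value { get; }

    private Maybe() { }
    private Maybe(T value) { Value = value; HasValue = true; }

    public static Maybe<T> Some(T value) => new Maybe<T>(value);
    public static Maybe<T> None() => new Maybe<T>();
}

public static class MaybeExtensions
{
    // Invoke the action only when a value is present.
    public static void Do<T>(this Maybe<T> maybe, Action<T> action)
    {
        if (maybe.HasValue) action(maybe.Value);
    }
}
```

With this, a repository can expose `Maybe<string> GetConfigurationValue(string key)`, and the consumer can call `.Do(config => Process(config))` on the result without any null/empty guard.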
You can write different extension methods on the Maybe class, depending on how you want to process the value. In the above example, I have a Do extension method that calls a function with the configuration value if one exists. By explicitly stating that a value may or may not be present, we have more clarity in code. No longer do we need unnecessary null checks in cases where a value is always present. This works best when agreed upon as a convention by the development team and enforced through tooling (like code analysis).
One of the root problems of having a lot of null/empty checks scattered across the code is Primitive Obsession. Just because you can represent a value as a string doesn’t mean that you always should. Enforcing the structural restrictions imposed by the business is best done by encapsulating these constraints within a class, also known as a Value Object. This leads to classes for representing various non-nullable values, e.g. Name, Configuration, Age, etc. You can use this in conjunction with the Null Object pattern if required. A value object is a class whose equality is based on the value that it holds, so two class instances with the same values are treated as equal. In F# you get this by default, but in C# you need to override the Equals and GetHashCode functions to enforce this equality.
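The value object snippet is not reproduced here; a sketch of what such a class can look like is below. The Name class and its constraint are illustrative, not the original code.

```
using System;

// Illustrative Value Object sketch: the constraint lives in one place,
// and equality is based on the value held, not the reference.
public sealed class Name
{
    public string Value { get; }

    public Name(string value)
    {
        // The business constraint is enforced once, at construction.
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("Name cannot be null or empty.", nameof(value));
        Value = value; // immutable thereafter
    }

    public override bool Equals(object obj) =>
        obj is Name other && Value == other.Value;

    public override int GetHashCode() => Value.GetHashCode();
}
```

Any code receiving a Name can now rely on it holding a valid value, and two instances holding the same string compare as equal.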
Modeling concepts in the domain as classes helps you to contain the domain/business constraints in a single place. This prevents the need to have null checks elsewhere in the code. Value objects being immutable helps enforce class invariants.
The above two methods help create stronger contracts in code. As with any convention, this is useful only when followed by the whole team, and conventions are best followed when enforced through tooling. You can create custom code analysis rules to enforce the return type to be a Maybe type whenever a method can return null. Even if you are introducing this into a large existing code base, you can do it incrementally by starting to enforce the rules on commits (if you are using git), like when introducing styling into an existing project. What other contracts do you find helpful to make code more expressive?
While working with clients I often get into conversations with the domain experts and people involved directly with the business. The discussions usually happen around the process they are currently following and how to automate it. Knowing the business process is helpful, but getting influenced by it when designing the solution is often not effective. Recently our team was in a conversation with a domain expert about a new feature request.
Domain Expert: We need to charge customers a processing fee if they pay using an electronic payment method. Depending on the type of card (Mastercard, Visa, etc.) the processing charge percentage differs. The processing fees are always charged in the billing period subsequent to the payment. For e.g. if a customer pays 1000$ for the month of November, then his December invoice will include a 2% card processing charge (20$).
Team: That sounds easy; I think we have enough details to get started on this. Thank you.
Domain Expert: Perfect. Ahhh… before you go, I think this can be a Hangfire job that runs on the 29th of every month, a few days before the billing date (the 3rd), and generates these charges for the client. This is what we do manually at present. (And walks off)
Team: Discussing amongst themselves, the team agreed that creating a recurring job was the way to go. Based on the assumption that this job would run only once a month, the job was to read all the invoices from the 29th of the previous month till the 28th of the current month and charge the clients. The meeting was dismissed, and off went everyone, busy getting the new feature out.
Business has Exceptions
Problems started coming up the very first month after the feature was deployed. Below is the sequence of events that happened.
- 29th : Nice work, team! The processing charges have been applied as expected.
- 30th : Some of the invoices have wrong data. We have deleted them. Can you run the job?
- 2nd : A few of our clients (as usual) paid late and we need to charge their processing fees. Can you run the job?
- 15th : One of our clients is ending tomorrow, so we need to send them an invoice and it should include the processing fees for their last payment. Can you run the job?
But wait! We had decided that we would run this job only once a month, and that that was the only time we needed to process the charges. We cannot run that job over and over again.
What I’ve noticed over the years is that our users find very creative ways to achieve their business objectives despite the limitations of the system that they’re working with. We developers ultimately see these as requirements, but they are better interpreted as workarounds.
The business was right when it said ‘This is what we do manually at present.’ What they did not say, though, is that there were always exceptions. In those cases, they did the same process, just for the exceptions. A business process mostly covers the majority of cases, and the exceptions always get handled ad hoc. For the business, it’s whatever takes up the most of their time that matters more.
Finding the Way Out
The problem, in this case, was that the team modeled the solution exactly as the business did manually. This kind of solution is most likely to fail when exceptions occur. The human brain can deal with these exceptions easily, but for a program to do so it needs to be told how, which implies that alternate flow paths need to be defined. So, with the improved understanding of these exception cases, the team worked through the problem again. After some discussion, the team re-defined the original problem statement: *We need to be able to run the job any number of times and it should have the same effect.*
A payment should get one and only one processing charge associated with it, no matter how many times it is seen by the job.
With the new implementation, the team decided to maintain a list of payments the job had seen and processed, keyed by a strong identifier that does not change. Every time a payment is seen, it is checked against this list. If a charge is not already applied, one is applied and the payment is added to the list of processed payments. This ensures that the job can be run at any time. The team also added the capability to specify the time range to look for invoices, defaulting to the 29th - 28th, and a way to void payment charges already applied, so that whenever the invoices change they can just clear those off and re-run the job. These changes gave the flexibility to meet the business's exception cases.
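The processed-payments approach can be sketched as follows. This is a minimal illustration, not the team's actual code: the names, the in-memory set (which would be a persisted table in production), and the 2% charge are all assumptions.

```python
processed_payments = set()   # persisted table in production; payment ids already charged
charges = []                 # charges generated so far


def run_charging_job(payments):
    """Apply a processing charge to every payment not yet charged."""
    for payment in payments:
        if payment["id"] in processed_payments:
            continue  # already charged: skipping makes re-runs safe
        charges.append({"payment_id": payment["id"],
                        "charge": payment["amount"] * 0.02})
        processed_payments.add(payment["id"])


payments = [{"id": "P-1", "amount": 100.0}, {"id": "P-2", "amount": 250.0}]
run_charging_job(payments)
run_charging_job(payments)  # second run sees only processed payments: no new charges
```

Because the check happens per payment, the job can be re-run after a late payment or a voided invoice without double-charging anyone.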
The term idempotent is used in mathematics to describe a function that produces the same result if it is applied to itself, i.e. f(x) = f(f(x)). In Messaging this concept translates into a message that has the same effect whether it is received once or multiple times. This means that a message can safely be re-sent without causing any problems even if the receiver receives duplicates of the same message.
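The mathematical definition is easy to check with a throwaway example; absolute value is idempotent, while incrementing is not:

```python
f = abs                # idempotent: f(f(x)) == f(x)
g = lambda x: x + 1    # not idempotent: g(g(x)) != g(x)

assert f(f(-42)) == f(-42)   # both are 42
assert g(g(0)) != g(0)       # 2 vs 1
```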
Being idempotent is what we missed in the first implementation. There was an assumed ‘idempotency’ in that the job would be run only once a month. But this constraint was not something the code had control over or could enforce. The job was also not idempotent at the granular level it was affecting - payments. Asserting idempotency at the batch level fails when we want to re-run batches (when exceptions like a wrong invoice happen). Idempotency should be enforced at the unit level of change, which is what maintaining a list of processed payments gives us. Any payment that has not been processed before gets processed when the job runs, and we can ensure that each payment is processed at most once.
This is just one example where we fail to see beyond the business problem to the computing problems accompanying it. It will not always be easy or fast to rewrite the code, and even if we fail to see these problems, the business will eventually make us face them. It is when we can see the computing problems that accompany a business problem that we start becoming better developers. Applying basic computing principles, probing the domain expert during discussions, sitting with domain experts while they work, etc., are all good ways to start seeing the untold business processes. I hope this helps the next time you are in a meeting with a domain expert or solving a business problem.
I came across a question in the Azure Key Vault forums looking for options to get notified when Keys or Secrets in a vault near expiry. It's useful to know when objects (Keys/Secrets) are nearing expiry, so that you can take the necessary action. I decided to explore my proposed solution of having a scheduled custom PowerShell script that notifies when a key is about to expire. In this post, we will see how to get all objects nearing expiry and schedule the check to run daily using an Azure Runbook.
Getting Expiring Objects
Both Keys and Secrets can be set with an Expiry date. The expiry date can be set when creating the Object or can be set on an existing Object. This can be set from the UI or using PowerShell scripts (setting the -Expires attribute).
Key Vault (at the time of writing) throws an exception when an expired key is accessed over the API. It also does not provide any notification when a key/secret is about to expire. The last thing you want is your application going down because of an expired object in the vault. With Get and List access on the vault, we can retrieve all the keys and secrets in the vault and loop through them to find objects that are nearing expiry.
The PowerShell script takes the Vault Name, the number of days before expiry at which an alert should be raised, and flags indicating whether all versions of keys/secrets should be checked for expiry. The full script is available here.
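In outline, the script's inputs look something like this (sketched in Python; the original is a PowerShell param block, and the parameter names and the 30-day default here are my assumptions):

```python
def get_expiring_objects(vault_name: str,
                         alert_days_before: int = 30,
                         check_all_key_versions: bool = False,
                         check_all_secret_versions: bool = False):
    """Report keys/secrets in `vault_name` expiring within `alert_days_before` days."""
    ...
```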
All keys and secrets are converted into a common object model, which contains just the Identifier, Name, Version and the Expiry Date if it has one.
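Such a common model can be sketched as a simple record type; the field names below follow the description in the post (Identifier, Name, Version, Expiry Date), and the sample values are made up:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class VaultObject:
    """Common shape for both keys and secrets."""
    identifier: str
    name: str
    version: str
    expires: Optional[datetime] = None  # None when no expiry is set

obj = VaultObject("https://myvault.vault.azure.net/keys/app-key/1",
                  "app-key", "1", datetime(2025, 1, 31))
```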
Depending on the flags set for retrieving all key/secret versions, it fetches objects from the vault and returns them in the common object model above.
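The flag-based selection can be sketched like this; `fake_current`/`fake_all` stand in for the real Key Vault list calls (Get-AzKeyVaultKey / Get-AzKeyVaultSecret in the PowerShell original), and the dictionary field names are illustrative:

```python
def to_vault_object(raw):
    """Map a raw vault item onto the common model."""
    return {"identifier": raw["id"], "name": raw["name"],
            "version": raw["version"], "expires": raw.get("expires")}

def fetch_objects(vault_name, all_versions, fetch_current, fetch_all):
    """Choose the retrieval strategy based on the all-versions flag."""
    source = fetch_all if all_versions else fetch_current
    return [to_vault_object(raw) for raw in source(vault_name)]

# Stand-ins for the actual vault calls:
def fake_current(vault):
    return [{"id": "keys/app-key/2", "name": "app-key", "version": "2"}]

def fake_all(vault):
    return fake_current(vault) + [
        {"id": "keys/app-key/1", "name": "app-key", "version": "1"}]

current_only = fetch_objects("my-vault", False, fake_current, fake_all)
everything = fetch_objects("my-vault", True, fake_current, fake_all)
```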
Now that we have all the keys and secrets we want to check, all we need to know is whether any of them are expiring in the upcoming days.
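The expiry check itself is a date comparison: anything that expires on or before the alert cutoff (including objects already expired) gets reported. A sketch of that logic, with made-up object names:

```python
from datetime import datetime, timedelta

def expiring_soon(objects, days_before, now):
    """Return objects that are expired or expiring within `days_before` days."""
    cutoff = now + timedelta(days=days_before)
    return [o for o in objects
            if o["expires"] is not None and o["expires"] <= cutoff]

now = datetime(2025, 1, 1)
objects = [
    {"name": "app-key", "expires": datetime(2025, 1, 10)},   # within 30 days
    {"name": "db-secret", "expires": datetime(2025, 6, 1)},  # far in the future
    {"name": "no-expiry", "expires": None},                  # never expires
]
alerts = expiring_soon(objects, days_before=30, now=now)  # only app-key
```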
Scheduling Expiry Notification using Azure Runbook
You can run this manually every time you want a list of objects that are expired or nearing expiry. Alternatively, you can set up a scheduled task to run the script at a set frequency. Since you are already on Azure, you can let Azure Automation schedule the task for you. A Runbook in an Azure Automation account can be granted access to the key vault: the Automation account should have a ‘Run as account’ set up, and the service principal created for it can be assigned Access Policies to access the vault. Check out the Accessing Azure Key Vault From Azure Runbook post for step-by-step instructions on how to set up a runbook to access the key vault. You can then schedule the runbook to execute at fixed time intervals. Feel free to modify the script to send email notifications, push notifications, or whatever matches your needs.
Generally, it is good practice to rotate your keys and secrets once in a while. Use the expiry attribute to set an expiry date and force yourself to keep your sensitive configuration fresh. It's likely that such a notification feature will eventually be built into Key Vault, but till then I hope this helps you keep track of keys and secrets that are nearing expiry and take the necessary action to renew them. The full script is available here.