In an earlier post, we saw how to enable Role-Based Access for .NET Core web applications. We used hardcoded AD group ids in the application, as shown below.

"AdGroups": [
  {
    "GroupName": "Admin",
    "GroupId": "119f6fb5-a325-47f9-9889-ae6979e9e120"
  },
  {
    "GroupName": "Employee",
    "GroupId": "02618532-b2c0-4e58-a32e-e715ddf07f63"
  }
]

To avoid hardcoding the ids in the application config, we can use the Graph API to query the AD groups at runtime. The GraphServiceClient from the Microsoft.Graph NuGet package can be used to connect to the Graph API. In this post, we will see how to use the API client to retrieve the AD groups. We will look at two authentication mechanisms for the Graph API - one using client credentials and the other using Managed Service Identity.


Using Client Credentials

To authenticate using Client Id and secret, we need to create an AD App in the Azure portal. Add a new client secret under the ‘Certificates & Secrets’ tab. To access the Graph API, make sure to add permissions under the ‘API permissions’ tab, as shown below. I have added the required permissions to read the AD Groups.

private static async Task<GraphServiceClient> GetGraphApiClient()
{
    // Requires the Microsoft.Graph and Microsoft.IdentityModel.Clients.ActiveDirectory (ADAL) NuGet packages
    var clientId = "AD APP ID";
    var secret = "AD APP Secret";
    var domain = "mydomain.onmicrosoft.com";

    // Acquire an app-only token for the Graph API using the client credentials
    var credentials = new ClientCredential(clientId, secret);
    var authContext = new AuthenticationContext($"https://login.microsoftonline.com/{domain}/");
    var token = await authContext.AcquireTokenAsync("https://graph.microsoft.com/", credentials);
    var accessToken = token.AccessToken;

    // Attach the token to every request the Graph client makes
    var graphServiceClient = new GraphServiceClient(
        new DelegateAuthenticationProvider((requestMessage) =>
        {
            requestMessage
                .Headers
                .Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

            return Task.CompletedTask;
        }));

    return graphServiceClient;
}

Using Managed Service Identity

With the client credentials approach, we have to manage the AD app and the associated secrets ourselves. To avoid this, we can use Managed Service Identity (MSI), and Azure will manage the identity and its credentials for us. To use MSI, turn on Identity for the Azure Web App from the Azure Portal.

For the MSI service principal to access the Microsoft Graph API, we need to assign it the appropriate permissions. This is not possible through the Azure Portal, so we need to use a PowerShell script. As before, we only need permission to read the Azure AD groups. '00000003-0000-0000-c000-000000000000' is the well-known Application ID of the Microsoft Graph API. Using it, we can filter the App Roles to find the permission to read the AD groups.

Connect-AzureAD
$graph = Get-AzureADServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"
$groupReadPermission = $graph.AppRoles `
    | where Value -Like "Group.Read.All" `
    | Select-Object -First 1

# Use the Object Id as shown in the image above
$msi = Get-AzureADServicePrincipal -ObjectId <WEB APP MSI Identity>

New-AzureADServiceAppRoleAssignment `
    -Id $groupReadPermission.Id `
    -ObjectId $msi.ObjectId `
    -PrincipalId $msi.ObjectId `
    -ResourceId $graph.ObjectId

As we have seen in previous posts on MSI (here and here), we use the AzureServiceTokenProvider to authenticate and get the token. The client id and secret are no longer required. The AzureServiceTokenProvider class tries to get a token using Managed Service Identity, Visual Studio, the Azure CLI, and Integrated Windows Authentication. In our case, when deployed to Azure, the code uses MSI to get the token.

private static async Task<GraphServiceClient> GetGraphApiClient()
{
    // Requires the Microsoft.Azure.Services.AppAuthentication NuGet package
    var azureServiceTokenProvider = new AzureServiceTokenProvider();
    string accessToken = await azureServiceTokenProvider
        .GetAccessTokenAsync("https://graph.microsoft.com/");

    // Attach the token to every request the Graph client makes
    var graphServiceClient = new GraphServiceClient(
        new DelegateAuthenticationProvider((requestMessage) =>
        {
            requestMessage
                .Headers
                .Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

            return Task.CompletedTask;
        }));

    return graphServiceClient;
}

Getting AD Groups Using Graph Client

We can now use the GraphServiceClient to get the groups from Azure AD and use them to configure the authorization policies, as shown below.

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddAuthorization(options =>
    {
        var adGroups = GetAdGroups();

        // Add an authorization policy for every AD group
        foreach (var adGroup in adGroups)
            options.AddPolicy(
                adGroup.GroupName,
                policy =>
                    policy.AddRequirements(new IsMemberOfGroupRequirement(adGroup.GroupName, adGroup.GroupId)));
    });
    services.AddSingleton<IAuthorizationHandler, IsMemberOfGroupHandler>();
}

private static List<AdGroupConfig> GetAdGroups()
{
    // Blocking on the async calls here, since ConfigureServices is synchronous
    var client = GetGraphApiClient().Result;
    var allAdGroups = new List<AdGroupConfig>();

    var groups = client.Groups.Request().GetAsync().Result;

    // Page through all the groups in the directory
    while (groups.Count > 0)
    {
        allAdGroups.AddRange(
            groups.Select(a =>
                new AdGroupConfig() { GroupId = a.Id, GroupName = a.DisplayName }));

        if (groups.NextPageRequest != null)
            groups = groups.NextPageRequest.GetAsync().Result;
        else
            break;
    }

    return allAdGroups;
}
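
The IsMemberOfGroupRequirement and IsMemberOfGroupHandler used above come from the earlier post on role-based access, and AdGroupConfig is just a simple class with GroupName and GroupId string properties. For reference, below is a minimal sketch of one possible shape for the requirement and handler, assuming the group ids arrive on the "groups" claim of the signed-in user; the actual implementation is in the earlier post.

// A minimal sketch; requires the Microsoft.AspNetCore.Authorization namespace
public class IsMemberOfGroupRequirement : IAuthorizationRequirement
{
    public string GroupName { get; }
    public string GroupId { get; }

    public IsMemberOfGroupRequirement(string groupName, string groupId)
    {
        GroupName = groupName;
        GroupId = groupId;
    }
}

public class IsMemberOfGroupHandler : AuthorizationHandler<IsMemberOfGroupRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, IsMemberOfGroupRequirement requirement)
    {
        // Succeed when the signed-in user has a "groups" claim matching the required group id
        var isMember = context.User.HasClaim(
            c => c.Type == "groups" &&
                 c.Value.Equals(requirement.GroupId, StringComparison.OrdinalIgnoreCase));

        if (isMember)
            context.Succeed(requirement);

        return Task.CompletedTask;
    }
}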

The AD groups no longer need to be hardcoded in the application. Also, with Managed Service Identity, we no longer need to manage an additional AD app and credentials as part of the application.

Hope it helps!

Brisbane To Gold Coast Cycle Challenge, B2GC 2019

A short recap of the day - Very well organized event!

Brisbane To Gold Coast Cycle Challenge (B2GC) is a fun ride event - but for me, it was a race, a race against myself. The longest I had ridden before this was 50km. For B2GC, I had decided to skip the rest stops and head straight for the finish.

The event was very well organized. A big thank you to all the organizers and volunteers.

Things I carried on my bike (Check out my post here for specifics of my bike and accessories I use)

  • 4 Energy gels
  • 1 Oats bar - did not use
  • 1 bottle water + 1 bottle Electrolyte
  • Mini Toolkit + puncture kit + 1 spare tube + mini pump - Had all of these in my Aero Wedge
  • Wallet (Id Card + some cash)
  • Mobile Phone

Things in the bag (handed over at the cloakroom at start site)

  • Thongs/sandals - wore them after the ride
  • A pair of clothes - did not use

B2GC 2019 was on September 15, 2019 - a warm and sunny day, perfect for riding. I woke up at 4 am and got ready. I had put my bike in the car the previous night and packed all the things, making sure I had everything I needed. Said goodbye to my wife and started for UQ at 4:30. It was a 30-minute drive, and I reached there at around 5 am. Lots of cyclists were already there, getting ready for the early start with the red category. I planned to start with the blue category (< 25km/hr).

Parked my car at P10 - UQ Centre car park, as instructed on the website, and that was quite close to the start point. I am not used to putting the bike wheels back on, and it took me around 10 minutes. Once all set, I headed to the start point. Dropped off my bag at the cloakroom - they took a 2-dollar donation to get the bag over to the finish site at Gold Coast. Hit the loo after a short 10-minute queue. I started the ride at around 6 am, along with all the other blue bib holders, from the Eleanor Schonell Bridge - aka the Green Bridge.

The markings along the way were quite clear; there was no way anyone could lose their way. Throughout the ride, I had fellow riders in front of and behind me. At most major intersections, there were volunteers and police officers stopping the traffic and making way for the cyclists. I have not ridden much with cycling shoes and cleats, and I did face some difficulty clipping in and out at signals. For my work commute, I use regular running shoes, so I don't have to carry an extra pair for work. In total, I got around 4-5 red stop signals (welcome breaks for me, as I got to stretch my legs). At one signal, I did fumble a bit on my clipped-in side as I came to a stop. Luckily, I didn't fall over.

When I crossed the finish line at Gold Coast, my Garmin showed a little over 91 km. I rode to the back of the finish line through a bikeway to make it a full 100. Though I started with the blue category, I finished as an Orange (with an average of 26.3 km/hr), and I am pretty happy with my finish time. After the race, there was food and coffee (paid) and lots of stalls. I ate a sausage and rested for a while. I had pre-booked my bus tickets, as part of my B2GC registration, to get back to Brisbane. The bikes were taken in a separate truck, to be dropped off at the same place as the bus. I took the 10:30 bus back and arrived in Brisbane around 11:45. The bicycle truck arrived about 10 minutes later, and I was back at my car by 12:15. Loaded the bike back in the car and headed home!

Had a great ride and kudos to everyone who participated in the event!

You can find all the photos I took (and the official ones of me) here

It's not often that you want to debug into applications running on a Virtual Machine, but that's not to say it is never required. Recently, at one of my clients, I had to debug into an application running on an Azure Virtual Machine. I wanted to debug an application with Azure AD group logic, and my laptop was not domain-joined. This called for remote debugging, with the application running on a domain-joined virtual machine.


In this post, we will look at how to set up the Virtual Machine and Visual Studio for remote debugging. If you are interested in watching this in action, check out the video in the link above.

Setting up Virtual Machine

To be able to remote debug into a Virtual Machine, it needs to be running the Remote Debugging Monitor (msvsmon). Assuming you have Visual Studio installed on your local machine (which is why you are trying to debug in the first place), you can find msvsmon under the folder C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\Common7\IDE\Remote Debugger. The path might be slightly different based on the version and edition of Visual Studio (Professional, Community, Enterprise); the above path is for Visual Studio (VS) 2019 Professional. Copy the Remote Debugger folder over to the virtual machine. Alternatively, you can download and install the Remote Tools for Visual Studio. Make sure you get the version that matches the Visual Studio version that you will be using to debug.

Run msvsmon.exe as an Administrator, and it will prompt you to set up some firewall rules on the Virtual Machine, as below.

Once confirmed, it adds the following firewall rules for the x64 and x86 versions of the Remote Debugger.

The Remote Debugging Monitor listens on a port (each Visual Studio version has a different default) - 4024 for VS 2019. This can be changed under Options if needed. For this example, I have turned off Authentication, as shown below.

Azure Portal Settings

In the Azure Portal, under the Networking tab for the Virtual Machine, add an inbound port rule to open the port that msvsmon is listening on - 4024 in this case.

Debugging From Visual Studio

Now that everything is set up, we are good to debug the application from our local machine. Make sure the application to be debugged is running on the Virtual Machine. Go to Debug -> Attach To Process in Visual Studio, choose Remote (no authentication) under the Connection type, and enter the IP address or the FQDN of the VM along with the port number. Shortly, you should see the list of applications running on the machine, and you can choose the appropriate app to debug.

Visual Studio is now debugging the application running on the Virtual Machine. Hope this helps and happy debugging!

Cycling To Work: What's in My Bag

The best thing about not being able to remote work is THE COMMUTE

Commuting to work on a cycle is one thing that I look forward to every day. The initial inertia to get started is very high - but trust me, do it once, and you are very likely to continue doing it. I try to commute to work on my Giant TCR three days a week. The other two days, I go for an early morning run, so I take the bus to work. I use an Osprey Radial 34 as my commute bag. The bag itself is excellent and comfortably fits everything I need for the day. It also provides exceptional riding comfort with its padded mesh back. I would recommend the 34L version if you cannot leave things at work (like a pair of jeans, etc.).

Below are the things that are typically in my bag

  1. Osprey Radial 34
  2. Topeak Race Rocket HP Pump
  3. Topeak Mini 20 Pro Tool
  4. JetBlack Svelto Photochomatic Sunglasses Red/Black
  5. Spare tubes, puncture kit
  6. Work Laptop - Surface Pro or Lenovo X1 Extreme
  7. Logitech MX Master
  8. Bose QC 35 II - Black
  9. Bullet Journal
  10. Sakura Pigma Micron Pen
  11. Ikea Lunch Box
  12. Snacks (Bar, Banana etc.)

And the things on the bike:

  1. Bike - Giant TCR Advanced Pro 1 2016
  2. Lezyne Macro 1100/ Strip PRO 300 Light Set - Black
  3. Topeak Aero Wedge QuickClip Saddle Bag Medium Size
  4. Kryptonite Keeper 785 Integrated Chain

At work (depending on the client I am with), I usually have access to a bike parking area and also showers. I use the Kryptonite Keeper 785 Integrated Chain to secure the bike to the racks. On weekends, when I go for longer rides, I use the Topeak Aero Wedge QuickClip Saddle Bag to carry the mini tool, puncture kit, extra tubes, keys, etc. Since it is a quick release, it is easy to remove or put on as required.

If you are planning to start commuting to work on your bike, don't let the list of things put you off from starting. I started with just the bike and added all these things one by one over the last year. Do you commute to work on your bike? What gear do you use and find helpful? If you don't already, try to do it at least once a week, and soon you will enjoy it as much as I do!

A few months back, I got a used pair of Jaybird Run headphones to listen to music while running. I enjoy running with them, and they are a perfect fit - they have never dropped out during my runs. However, after a couple of weeks, the right earbud stopped charging. Here are a few tips that help me get it charged every time - yes, I need to do one of these, or even a combination of them, to get it to charge every time I pop them back into the case.


 

  • Remove the ear tips and open/close the case lid a number of times until it decides to charge
  • Clean the tips and the charging points in the case with a cotton bud
  • Blow hard into the small gap in the charging case, where the lid locks in. I assume the issue is caused by dust accumulating inside the case. Doing this fixes the charging issue in one or two tries and has been the fastest way to get it to charge.

I love these earphones, but it's a pain to get them to charge - Hope this helps.

Exercism: A Great Addition To Your Learning Plan

“For the things we have to learn before we can do them, we learn by doing them.” - Aristotle

Learning by doing has been proven effective, and it is the same when learning a new programming language. However, it is equally important that you learn the right things and in the correct manner. Having a mentor or a go-to person in the field that you are learning is essential. Exercism brings both together in one place, free of cost.

Exercism is an online platform designed to help improve your coding skills through practice and mentorship.

Exercism is a great site to accompany your journey in learning something new.

  • It has a good set of problems to work your way through the language.
  • A good breakdown of the language concepts covered through the exercises. Exercism gives an overview of what to concentrate on next.
  • The best part is the mentor feedback system. Each time you submit a main/core exercise, it goes through an approval process. To me, learning a language is more about understanding the language-specific features and writing more idiomatic code.

Exercism is entirely open source and open to your contributions as well. It takes a lot of effort to maintain something of this standard — also a great effort from the mentors by providing feedback.

Feedback is useful, especially when moving across programming paradigms - like between OO and Functional.

Solving a problem is just one piece of the puzzle. Writing idiomatic code is equally important, especially when learning a new language. Feedback like the above helps you think in the language (in my case, F#).

Don't wait for feedback approval; work on the side exercises in the meantime. Twitter or Slack channels are also a great way to request feedback. Reading a book alongside helps if you need more structured learning. For F#, I have been reading the book Real World Functional Programming.

I discovered Exercism while learning F#. It has definitely helped keep me going. Hope it helps you!

My recent project got me back into some long-lost technologies, including Excel sheets, VB scripts, Silverlight, bash scripts, and whatnot. One of those things was a bash script that imported data from different CSV files into a data store. There were 40-50 different CSV schemas mapped to their corresponding tables to import. I had to repoint these scripts to write to a SQL Server.

Challenges with BCP Utility

The bcp utility is one of the best options for bulk importing data from CSV files at a command line. However, bcp requires the fields (columns) in the CSV data file to match the order of the columns in the SQL table, and the column counts must also match. Neither was the case in my scenario. The way you can work around this with bcp is by providing a Format File.

  • CSV file columns must match order and count of the SQL table
  • Requires a Format File otherwise
    • Static generation is not extensible
    • Dynamic generation calls for a better programming language

A format file has an XML and a non-XML variant and can be pre-generated or generated dynamically using code. I did not want to pre-generate the format files because I do not own the generation of the CSV files - there could be more files, new columns, and the order of columns can change. Dynamic generation involved a lot more work, and bash scripts didn't feel like the appropriate choice. All of this made me decide to rewrite the code.

SqlBulkCopy

C# being my natural choice of programming language, and with its excellent support through SqlBulkCopy, I decided to rewrite the existing bash script. SqlBulkCopy is the managed equivalent of the bcp utility and allows you to write managed code for similar functionality. SqlBulkCopy can only write data to SQL Server; however, any data source can be used, as long as it can be loaded into a DataTable instance.

The ColumnMappings property defines the relationship between the columns in the data source and the columns in the SQL table. The mappings can be added dynamically by reading the headers of the CSV data file. In my case, the names of the columns were the same as those of the table. My first solution involved reading the CSV file and splitting the data on commas (",").

var lines = File.ReadAllLines(file);
if (lines.Length == 0)
    return;

var tableName = GetTableName(file);
var columns = lines[0].Split(',').ToList();
var table = new DataTable();
sqlBulk.ColumnMappings.Clear();

// Map each CSV header to the SQL column of the same name
foreach (var c in columns)
{
    table.Columns.Add(c);
    sqlBulk.ColumnMappings.Add(c, c);
}

// Skip the header row and load the data rows into the DataTable
for (int i = 1; i < lines.Length; i++)
{
    var line = lines[i];
    // Explicitly mark empty values as null for the SQL import to work
    var row = line.Split(',')
        .Select(a => string.IsNullOrEmpty(a) ? null : a).ToArray();
    table.Rows.Add(row);
}

sqlBulk.DestinationTableName = tableName;
sqlBulk.WriteToServer(table);

A CSV file may have empty values for columns where the associated column in the SQL table is nullable. Depending on the column type, SqlBulkCopy throws errors like the one below if the value is an empty string: 'The given value of type String from the data source cannot be converted to type <TYPENAME> of the specified target column.'

Explicitly marking the empty values as null (as in the code above) solves the problem.

CSV File Gotchas

The above code worked for all files until I hit some CSV files that had a comma as a valid value in a few of the columns, as shown below.

Id,Name,Address,Qty
1,Rahul,"Castlereagh St, Sydney NSW 2000",10

Splitting on commas no longer works as expected, as it also splits the address into two parts. Using a NuGet package for reading the CSV data made more sense, so I decided to switch to CsvHelper. Even though CsvHelper is intended to work with strong types, it also works with generic types - thanks to dynamic. Generating a type for each CSV format would be equivalent to generating the format file required by the bcp utility.

List<dynamic> rows;
List<string> columns;
using (var reader = new StreamReader(file))
using (var csv = new CsvReader(reader)) // newer CsvHelper versions need a culture, e.g. new CsvReader(reader, CultureInfo.InvariantCulture)
{
    // Read all rows as dynamic records and capture the header names
    rows = csv.GetRecords<dynamic>().ToList();
    columns = csv.Context.HeaderRecord.ToList();
}

if (rows.Count == 0)
    return;

var table = new DataTable();
sqlBulk.ColumnMappings.Clear();

// Map each CSV header to the SQL column of the same name
foreach (var c in columns)
{
    table.Columns.Add(c);
    sqlBulk.ColumnMappings.Add(c, c);
}

foreach (IDictionary<string, object> row in rows)
{
    // Explicitly mark empty values as null for the SQL import to work
    var rowValues = row.Values
        .Select(a => string.IsNullOrEmpty(a.ToString()) ? null : a)
        .ToArray();
    table.Rows.Add(rowValues);
}

sqlBulk.DestinationTableName = tableName;
sqlBulk.WriteToServer(table);

With CsvHelper added in, I am now able to successfully import the different CSV file formats into their corresponding tables in SQL Server. Do add the required error handling around the above functionality. Also, take note of the transaction behavior of SqlBulkCopy: by default, each bulk copy operation is performed as an isolated operation, and on failure, no records are written to the database. A managed transaction can be passed in to coordinate the bulk copy with other database operations, as sketched below.
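
For reference, here is a minimal sketch of passing an external transaction to SqlBulkCopy, using the System.Data.SqlClient constructor overload that accepts one, so the bulk import commits or rolls back together with any other commands; the connection string, table name, and DataTable are assumed to come from the code above.

// A sketch only - coordinate the bulk copy with other commands in one transaction
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        try
        {
            using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction))
            {
                bulkCopy.DestinationTableName = tableName;
                bulkCopy.WriteToServer(table);
            }

            // Other commands enlisted in the same transaction can run here
            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}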

Hope this helps!

In the previous post, we saw how to Enable History for Azure DevOps Variable Groups Using Azure Key Vault. However, there is one issue with this approach: whenever a deployment is triggered, it fetches the latest Secret value from the Key Vault. This behaviour might or might not be desirable, depending on the nature of the Secret.


In this post, we will see an alternative approach that uses the Azure Key Vault pipeline task to fetch Secrets while still allowing us to snapshot variable values against a release.

Secrets in Key Vault

Before we go any further, let's look at how Secrets look in Key Vault. You can create a Secret in Key Vault via various mechanisms, including PowerShell, the CLI, the portal, etc. From the portal, you can create a new Secret as shown below (from the Secrets section under the Key Vault). A Secret is uniquely identified by the name of the vault, the Secret name, and the version identifier. Without the version identifier, the latest value is used.

Since we have only one version of the Secret created, it can be identified using either the full SecretName/Identifier or just the SecretName, as both refer to the same value.

Depending on how you want your consuming application to get a Secret value, you should choose how to refer to a Secret - using the name only (to always get the latest version) or using the SecretName/Identifier to get a specific version.
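
To make the name-versus-version distinction concrete, below is a small sketch using the newer Azure.Security.KeyVault.Secrets and Azure.Identity packages; the pipeline task does this lookup for you, and the vault URL and version id here are only placeholders.

// A sketch only - illustrates a name-only lookup versus a name + version lookup
private static async Task ShowSecretVersions()
{
    var client = new SecretClient(
        new Uri("https://my-key-vault.vault.azure.net/"), // placeholder vault URL
        new DefaultAzureCredential());

    // Name only - always resolves to the latest version of the Secret
    KeyVaultSecret latest = await client.GetSecretAsync("ConnectionString");

    // Name + version identifier - pins the lookup to that specific version
    KeyVaultSecret pinned = await client.GetSecretAsync(
        "ConnectionString", "c8b9c1dd4e134118b13568a26f8d9778");

    Console.WriteLine($"{latest.Properties.Version} vs {pinned.Properties.Version}");
}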

Azure Key Vault Pipeline Task

The Azure Key Vault task can be used to fetch all or a subset of Secrets from the vault and set them as variables that are available in the subsequent tasks of a pipeline. Using the secretsFilter property on the task, you can either download all the Secrets (at their latest version) or specify a subset of Secret names to download. When specifying the names, you can use either of two formats - SecretName or SecretName/VersionIdentifier.

When using SecretName, the Secret is available as a variable with the same name in the subsequent tasks. With the SecretName/Identifier format, the Secret is available as a variable named SecretName/Identifier and also as one named SecretName (excluding the Identifier part).

For example, if the secretsFilter property is set to ConnectionString/c8b9c1dd4e134118b13568a26f8d9778, two variables will be created after the Azure Key Vault task step - one named ConnectionString/c8b9c1dd4e134118b13568a26f8d9778 and another named ConnectionString.

The application configuration can use either of the two names - usually ConnectionString, since we do not want the application configuration to be tightly coupled to the Secret identifier in Key Vault.

Azure DevOps Variable Groups

With the Key Vault integration now moved out of the Variable Group, it can either be blank or hold the list of Secret names. For the Azure Key Vault task, the variable names can either be passed in as a parameter value specified in the Variable Group (as shown in the image above), or the actual names (along with identifiers) can be specified as part of the task itself. I prefer to pass them in from the Variable Group - either as a single variable with comma-separated values or as multiple variables combined as comma-separated values in the task. To minimize editing the task on every release, I chose to pass a single variable with comma-separated values (as shown below).

With the Azure Key Vault task as the first task in the pipeline, all subsequent tasks will have the variables from Key Vault available to use - including file transforms and variable substitution options. Using a specific version of a Secret locks the Secret value down for the release that is created. When the Secret value is updated, the variable needs to be updated with the new version identifier for the pipeline to use the latest version; if not, it will still use the older version of the Secret value. This behaviour is now exactly the same as with the default variables in a DevOps pipeline - the variable values are snapshotted as part of the release.

The above feature is part of the Azure Key Vault Task as of version 1.0.37.

Using Key Vault via a Variable Group does not snapshot configuration values from the vault. When a deployment is triggered, the latest value from the Key Vault is used.

A couple of weeks back, I accidentally deleted an Azure DevOps Variable Group at work, and it sucked up half of my day. I was lucky that it was the development environment. Since I had access to the environment, I remoted into the corresponding machines (using the Kudu console) and retrieved the values from the appropriate config files. Even with all access in place, it still took me a couple of hours to fix it. This is not a great place to be.

In this post, let us see how we can link secrets from Key Vault to DevOps Variable Groups and how it protects us from accidental deletes and similar mistakes. Check out my video on Getting started with Key Vault and other related articles if you are new to Key Vault.

Azure Key Vault helps safeguard cryptographic keys and secrets used by applications. It increases security and control over keys and passwords.


Azure DevOps supports linking Secrets from an Azure Key Vault to a Variable Group. When creating a new variable group, toggle on the 'Link secrets from an Azure Key Vault as variables' option. You can now link the Azure subscription and the associated Key Vault to retrieve the secrets. Clicking the Authorize button next to the Key Vault sets the required permissions on the Key Vault for the Azure Service Connection (which is what connects your DevOps account with the Azure subscription).

Azure DevOps Variable Groups and Azure Key Vault

The Add button pops up a dialog, as shown below. It allows you to select the Secrets that need to be available as part of the Variable Group.

Azure DevOps Variable Groups link secrets from Vault

Create an Azure Key Vault for each environment to manage the Secrets for that environment. As per the current pricing, creating a Key Vault does not have any cost associated with it. Cost is based on operations against the Key Vault - around USD $0.03 per 10,000 transactions (at the time of writing).

Version History for Variable Changes

Key Vault supports versioning and creates a new version of an object (key/secret/certificate) each time it is updated. This helps keep track of previous values. You can set expiry/activation dates on a secret if applicable. Further, by setting up Expiry Notification for Azure Key Vault, you can stay on top of rotating your secrets/certificates on time. The Variable Group refers only to the Secret names in the Key Vault, and the Secret names are the same as those in the application configuration file. Every time a release is deployed, it reads the latest value of the Secret from the associated Key Vault and uses that for the deployment. This is different from defining the variables directly as part of the group, where the variables are snapshotted at the time of release.

For every deployment, the latest version of the Secret is read from the associated Key Vault.

Make sure this behavior is acceptable with your deployment scenario before moving to use Key Vault via Variable Groups. However, there is a different plugin that you can use to achieve variable snapshotting even with Key Vault, which I will cover in a separate post.

Handling Accidental Deletes

In case anyone accidentally deletes a variable group in Azure DevOps, recovering is as simple as cloning the variable group of one of your other environments and renaming it to be the Dev variable group. Mostly, it's the same set of variables across all environments. The actual secret values are not required anymore, as they are managed in Key Vault.

For argument's sake, what if I accidentally delete the Key Vault itself?

The good news is that Key Vault does have a recovery option. Assuming you create the Key Vault with the recovery options set (which you obviously will now), using the EnableSoftDelete parameter in PowerShell, you can recover from any delete action on the vault/key/secret.

Hope this helps save half a day (or even more) of someone (maybe me again) who accidentally deletes a variable group!

When interacting with third-party services over the network, it is good to have a fault handling and resilience strategy in place. Some libraries have built-in capabilities, while for others you might have to roll your own.

Below is a piece of code that I came across at one of my clients. It talks to the Auth0 API and gets all users. However, the Auth0 API has a rate limiting policy. The rate limits differ depending on the API endpoint and also vary based on time and other factors. The HTTP response headers contain information on the status of the rate limits for the endpoint and are dynamic. The code below defines a constant delay between subsequent API calls so as not to exceed the rate limits.

public async Task<User[]> GetAllUsers()
{
    var results = new List<User>();
    IPagedList<User> pagedList = null;

    do
    {
        pagedList = await auth0Client.Users.GetAllAsync(
           connection: connectionString,
           page: pagedList?.Paging.Start / PageSize + 1 ?? 0,
           perPage: PageSize,
           includeTotals: true,
           sort: "email:1");

        results.AddRange(pagedList);

        // Fixed delay between calls, intended to stay under the Auth0 rate limits
        await Task.Delay(THROTTLE_TIME_IN_MS);
    } while (pagedList.Paging.Start + pagedList.Paging.Length < pagedList.Paging.Total);

    return results.ToArray();
}

The delay seems valid when looked at in isolation, but when different code flows/apps make calls to the Auth0 API at the same time, it is no longer enough. The logs showed exactly that: there were many Auth0 errors with a 429 status code, indicating 'Too Many Requests' and 'Global Rate Limit has reached.'

An obvious fix here might be to re-architect the whole solution to remove this dependency on Auth0 and not make so many API calls in the first place. But let's accept the solution we have in place and see how we can make it more resilient and fault tolerant. Rate limit exceptions are an excellent example of transient errors. A transient error is a temporary error that is likely to disappear soon. By definition, it is safe for a client to ignore a transient error and retry the failed operation.

Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner.

With Polly added in as a NuGet package, we can define a policy to retry the API request up to 3 times when it fails with a 429 status code response. A backoff time is also added to subsequent retries, based on the attempt count and the hardcoded THROTTLE_TIME_IN_MS.

private Polly.Retry.AsyncRetryPolicy GetAuth0RetryPolicy()
{
    // Retry up to 3 times on a 429 (Too Many Requests) response, with exponential backoff
    return Policy
        .Handle<ApiException>(a => a.StatusCode == (HttpStatusCode)429)
        .WaitAndRetryAsync(
            3, attempt => TimeSpan.FromMilliseconds(
                THROTTLE_TIME_IN_MS * Math.Pow(2, attempt)));
}

The original call to Auth0, updated to use the retry policy, is as below.

...
pagedListResult = await auth0RetryPolicy.ExecuteAsync(() => auth0Client.Users.GetAllAsync(
    connection: connectionString,
    page: pagedListResult?.Paging.Start / PageSize + 1 ?? 0,
    perPage: PageSize,
    includeTotals: true,
    sort: "email:1"));

The calls to Auth0 are now more resilient and fault tolerant. The request is automatically retried if the failure reason is 'Too Many Requests' (429). It is an easy win with just a few lines of code. This is just one example of fault handling and retry with the Auth0 API; the same technique can be used with any other service that you depend on - you just need to define your own policy and modify the calls to use it. Hope this helps you handle transient errors in your application.
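
As an illustration of applying the same technique elsewhere, a retry policy for a generic HttpClient-based dependency might look like the sketch below; the status codes handled, retry count, and helper method are assumptions for illustration rather than code from the solution above.

// A sketch only - requires the Polly NuGet package (System.Net, System.Net.Http, Polly namespaces)
private static readonly HttpClient httpClient = new HttpClient();

private static Polly.Retry.AsyncRetryPolicy<HttpResponseMessage> GetHttpRetryPolicy()
{
    // Retry up to 3 times on network failures and on 429/5xx responses, with exponential backoff
    return Policy
        .Handle<HttpRequestException>()
        .OrResult<HttpResponseMessage>(r =>
            r.StatusCode == (HttpStatusCode)429 || (int)r.StatusCode >= 500)
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));
}

public static async Task<string> GetWithRetryAsync(string url)
{
    var response = await GetHttpRetryPolicy()
        .ExecuteAsync(() => httpClient.GetAsync(url));

    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync();
}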
