Create React App is the de facto choice for most of the websites I work on these days. In this post, we will see how to set up a build/deploy pipeline for a Create React App application in Azure DevOps. We will be using the YML format for the pipeline, which makes it possible to have the build definition as part of the source code.

Build Pipeline

In the DevOps portal, start by creating a new Build pipeline and choose the ‘Node.js with React’ template. By default, it comes with the ‘Install Node.js’ step that installs the required Node version and the ‘npm script’ step to execute any custom scripts. The output of the build step must be an artifact to deploy in the Release step. To support this, we add two more steps (Create Archive and Publish Artifacts) to the YML file, giving the following steps:

  • Install Node.js
  • Build UI (Npm script)
  • Create Archive
  • Publish Artifacts
# Node.js with React
# Build a Node.js project that uses React.
# Add steps that analyze code, save build artifacts, deploy, and more:

trigger:
  - master

variables:
  uiSource: "src/ui"
  uiBuild: "$(uiSource)/build"

pool:
  vmImage: "ubuntu-latest"

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: "10.x"
    displayName: "Install Node.js"

  - script: |
      pushd $(uiSource)
      npm install
      npm run build
    displayName: "Build UI"

  - task: ArchiveFiles@2
    displayName: Archive
    inputs:
      rootFolderOrFile: "$(uiBuild)"
      includeRootFolder: false
      archiveType: "zip"
      archiveFile: "$(Build.ArtifactStagingDirectory)/ui-$(Build.BuildId).zip"
      replaceExistingArchive: true

  - task: PublishBuildArtifacts@1
    displayName: Publish Artifacts
    inputs:
      PathtoPublish: "$(Build.ArtifactStagingDirectory)"
      ArtifactName: "drop"
      publishLocation: "Container"

The above pipeline generates a zip artifact of the contents of the ‘build’ folder.

Release Pipeline

To release to Azure Web App, create a new release pipeline and add the Azure Web App Task. Link with the appropriate Azure subscription and select the web application to deploy.
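If you prefer to define the deployment in YAML as well, below is a minimal sketch of the Azure Web App task. The service connection and app names are placeholders, and the package path assumes the ‘drop’ artifact produced by the build pipeline above.

- task: AzureWebApp@1
  displayName: Deploy UI
  inputs:
    # Placeholders - use your own service connection and web app name
    azureSubscription: "my-azure-subscription"
    appName: "my-react-app"
    package: "$(Pipeline.Workspace)/drop/ui-$(Build.BuildId).zip"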

Frontend Routing

When using React, you will likely use a routing library like react-router. In this case, the routing library must handle the URLs and not the server hosting the files. The server will fail to serve those routes, as it has nothing to interpret them. When hosting on IIS (which also applies to an Azure Web App on Windows), add a web.config file to the public folder. This file automatically gets packaged at the root of the artifact. The file has a URL Rewrite config that takes any route and points it to the root of the website so that the Index.html file gets served. E.g., if a user hits a route like ‘Customer/1223’ directly in the browser, IIS rewrites it to ‘/’ and serves the default file (Index.html) back to the user. React router then handles the route and serves the appropriate React component for ‘Customer/1223’.

If APIs are part of the same host, they need to be excluded from the URL Rewrite. The config below ignores ’/api’ from being redirected, and the same goes for any URL that matches a physical file on the server, like CSS, JS, images, etc.

<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="React Routes" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <add input="{REQUEST_URI}" pattern="^/(api)" negate="true" />
          </conditions>
          <action type="Rewrite" url="/" />
        </rule>
      </rules>
    </rewrite>
    <staticContent>
      <mimeMap fileExtension=".otf" mimeType="font/otf" />
    </staticContent>
  </system.webServer>
</configuration>

Environment/Stage Variables

When deploying to multiple environments (Test, Staging, Production), I like to have the configs as part of Azure DevOps Variable Groups. It allows having all the configuration for the application in one place, making it easier to manage. These variables are replaced in the build artifact at the time of release, based on the environment it is getting released to. One way to handle this is to have a script tag in the ‘Index.html’ file as below.

<script>
  window.BookingConfig = {
    searchUrl: "",
    bookingUrl: "",
    isDevelopment: true,
    imageServer: ""
  };
</script>
<meta charset="utf-8" />
<link rel="icon" href="%PUBLIC_URL%/favicon.ico" />

This file has the configuration for local development, allowing any developer on the team to pull down the source code and start running the application. Also add an ‘Index.release.html’ file, which is the same as Index.html but with placeholders for the variables. In the example, isDevelopment is an optional config that is false by default, hence not specified in the Index.release.html file.

<script>
  window.BookingConfig = {
    searchUrl: "#{SearchUrl}#",
    bookingUrl: "#{BookingUrl}#",
    imageServer: "#{ImageServer}#"
  };
</script>
<meta charset="utf-8" />
<link rel="icon" href="%PUBLIC_URL%/favicon.ico" />

In the build step, add a command-line task to replace Index.html with Index.release.html.

This step must be before the npm step that builds the application to have the correct Index.html file packaged as part of the artifact.

- task: CmdLine@2
  inputs:
    # The ubuntu-latest agent runs this script in bash, so use rm/mv
    script: |
      echo Replace Index.html with Index.release.html
      rm Index.html
      mv Index.release.html Index.html
    workingDirectory: "$(uiSource)/public"

In the release step, add the Replace Tokens task to replace tokens in the new Index.html file (Index.release.html in source control). Specify the appropriate root directory and the Target files to have variables replaced. By default, the Token prefix and suffix are ‘#{’ and ‘}#’. Add a new variable group for each environment/stage (Test, Staging, and Prod). Add the variables to the group and associate it to the appropriate stage in the release pipeline. The task will replace the configs from the Variable Groups at the time of release.
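For reference, below is a rough sketch of the Replace Tokens task (a marketplace extension) in YAML form. The task name and inputs are from version 3 of the extension; the root directory is an assumption and depends on where your release extracts the artifact.

- task: replacetokens@3
  displayName: Replace Config Tokens
  inputs:
    # Assumed location of the extracted artifact
    rootDirectory: "$(System.DefaultWorkingDirectory)/drop"
    targetFiles: "**/Index.html"
    tokenPrefix: "#{"
    tokenSuffix: "}#"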

I hope this helps you to set up a Build/Release pipeline for your create-react-app!

2019: What Went Well, What Didn't and Goals

A short recap of the year that is gone by and looking forward!

Another year has gone by so fast, and it is again time to do a year review.


2019 has been a fantastic year with lots of new learning, blogging, reading, running, and cycling. I started creating content for YouTube. Travel and Swimming did not go as planned. Looking forward to 2020!

What went well

Running and Cycling

I did lots of running and cycling again this year. I wanted to do a couple of events (including a full marathon); however, that did not happen. The only event I did was the Brisbane to Gold Coast 100k cycling, which was my first 100k ride and a great experience. I got a Tacx Neo towards the end of the year and am looking to start using it for structured training in the coming years. For running, following the FIRST Running Plan has helped me a lot to improve my average pace.

'Strava Summary'

Blogging and YouTube

I was a lot more consistent with the number of posts this year. Except for October (while I was on vacation), I had a minimum of 2 posts every month. I am also trying to complement the blog posts with YouTube videos and be more regular at it. I have published 10+ videos since August and am trying to build up my channel and content. Subscribe here if you are interested in knowing every time a new video is published.

Learning and Reading

I stumbled across Exercism during the year and found the FSharp track interesting. I completed the core exercises on the track. CSS is something I have always struggled with. Towards the end of this year, I took the Advanced CSS and Sass course on Udemy. I am halfway through the course and finding it extremely useful. It has helped me heaps to get going with CSS and SASS. I did want to build the Key Vault Explorer; however, that never took off.

As for reading, I had set a goal of 10 books for this year and am happy to have finished 11. I highly recommend Digital Minimalism and The Bullet Journal Method. Here is what I have been experimenting with after reading Digital Minimalism and how I have been using the Bullet Journal methodology.

What didn’t go well

I am happy with having set out with the right goals this year and being able to meet most of them. Here are some things that could have been better.

  • Swimming It’s been almost two years since I have been on and off with swimming. I have come a long way; however, I am still not at the point where I am comfortable saying I swim well.

  • Travel Two trips back to India took most of my vacation time. We also visited Bundaberg and Rockhampton - places within driving distance of Brisbane. However, we did not make any other international trips.

Goals for 2020

  • Reading Read 12 books - bumping it up by two books from last year’s challenge. I want to add more variety to the books.

  • Blogging and Youtube 3 posts and 3 videos every month. Try and build up a niche/specialization. It is something I have wanted to do for a long time but never got around to.

  • Tri Sports Complete a Marathon. Focus more on swimming. Complete a training program on my Tacx Neo with Trainer Road.

  • Learning Learn about Containers and SAFE stack.

I started with Bullet Journaling in 2019, and it has been helping me a lot with planning and organizing myself. I plan to use the same in 2020 and have got a new Journal, all ready to go.

Wishing you all a Happy and Prosperous New Year!

While playing around with the Windows Terminal, I had set up Aliasing to enable alias for commonly used commands.

E.g., typing in s implies git status.

I wanted to create new command aliases from the command line itself, instead of opening up the script file and modifying it manually. So I created a PowerShell function for it.

$aliasFilePath = "<Alias file path>"

function New-CommandAlias {
    param(
        [Parameter(Mandatory = $true)] [string] $CommandName,
        [Parameter(Mandatory = $true)] [string] $Command,
        [Parameter(Mandatory = $true)] [string] $CommandAlias
    )

    # Escape $args so it is written literally into the alias file
    $functionFormat = "function $commandName { & $command `$args }
New-Alias -Name $commandAlias -Value $commandName -Force -Option AllScope"

    $newLine = [Environment]::NewLine
    Add-Content -Path $aliasFilePath -Value "$newLine$functionFormat"
}

. $aliasFilePath

The script does override an existing alias with the same name. Use the ‘Get-Alias’ cmdlet to find existing aliases.

The above script writes a new function and maps it to the alias using the existing New-Alias cmdlet.

function Get-GitStatus { & git status -sb $args }
New-Alias -Name s -Value Get-GitStatus -Force -Option AllScope

Add this to your PowerShell profile file (run notepad $PROFILE) as we did for theming when we set up the Windows Terminal. In the above script, I write to the $aliasFilePath and load all the aliases from that file using the dot sourcing operator.

Below are a few sample usages

New-CommandAlias -CommandName "Get-GitStatus" -Command "git status -sb" -CommandAlias "s"
New-CommandAlias -CommandName "Move-ToWorkFolder" -Command "cd C:\Work\" -CommandAlias "mwf"

The full gist is available here. I have tried adding only a couple of commands, and it did work fine. If you find any issues, please drop a comment.

For a long time, I have been using Cmder as my command line. It was mostly for the ability to copy-paste, open multiple tabs, and add aliases (shortcut commands). I was never particularly interested in other customizations of the command line. However, one of these recent tweets made me explore the new Windows Terminal.

Windows Terminal is a new, modern, feature-rich, productive terminal application for command-line users. It includes many of the features most frequently requested by the Windows command-line community, including support for tabs, rich text, globalization, configurability, theming & styling, and more.

You can install using the command line itself or get it from the Windows Store. I prefer the Windows Store version as it gets automatically updated.
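For e.g., with the Chocolatey package manager, a one-liner like the below should do it (assuming the package id is still microsoft-windows-terminal on the Chocolatey gallery):

choco install microsoft-windows-terminal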


Pressing the WIN Key (Windows Key) + # (the position of the app on the taskbar) works as a toggle. If the app is open and selected, it will minimize; if not, it will be brought to the front and selected. If the app is not running, it will start the app.

In my case, Windows Key + 1 launches Terminal, Windows Key + 2 launches Chrome, Windows Key + 3 launches Visual Studio and so on.


To theme the terminal, you need to install two PowerShell modules.

Install-Module posh-git -Scope CurrentUser
Install-Module oh-my-posh -Scope CurrentUser

To load these modules by default on launching PowerShell, update the PowerShell profile. For this run ‘notepad $PROFILE’ from a PowerShell command line. Add the below lines to the end of the file and save. You can choose an existing theme or even make a custom one. You can further customize this as you want. Here is a great example to get started. I use the Paradox theme currently.

Import-Module posh-git
Import-Module oh-my-posh
Set-Theme Paradox

Restart the prompt, and if you see squares or weird-looking characters, you likely need some updated fonts. Head over to Nerd Fonts, where you can browse for them.

Nerd Fonts patches developer-targeted fonts with a high number of glyphs (icons) and gives all those cool icons in the prompt.

To make Windows Terminal use the new font, update the settings. Click the button with a down arrow right next to the tabs or use the Ctrl + , shortcut. It opens the profiles.json settings file, where you can update the font face per profile.

"fontFace": "UbuntuMono NF",


I use the command line mostly for interacting with git repositories and like having shorter aliases for commonly used commands, like git status, git commit, etc. My previous command line, Cmder, had a feature to set aliases. Similarly, in PowerShell, we can create a function to wrap the git command and then use the New-Alias cmdlet to create an alias. You can find a good list to start with here and modify them as you need. I have the list of aliases in a separate file and load it in the profile as below. Having it in Dropbox allows me to sync it across multiple devices and have the same aliases everywhere.

Use the dot sourcing operator to run the script in the current scope and make everything in the specified file available in the current scope.

. C:\Users\rahul\Dropbox\poweshell_alias.ps1

The alias does override any existing alias with the same name, so make sure that you use aliases that don’t conflict with anything that you already use. Here is the powershell_alias file that I use.

I no longer use Cmder and enjoy using the new Terminal. I have just scratched the surface of the Terminal here; there is heaps more you can do - formatting, customization, adding other shells, etc.

Enjoy the new Terminal!


At work, we usually use DbUp to deploy changes to SQL Server. We follow certain naming conventions when creating table constraints and indexes. Here is an example:

create table Product
(
  Id uniqueidentifier not null unique,
  CategoryId uniqueidentifier not null,
  VendorId uniqueidentifier not null,

  constraint PK_Product primary key clustered (Id),
  constraint FK_Product_Category foreign key (CategoryId) references Category (Id),
  constraint FK_Product_Vendor foreign key (VendorId) references Vendor (Id)
);

create index IX_Product_CategoryId on Product (CategoryId);

I had to rename a table as part of a new feature. I could have just renamed the table and moved on, but I wanted all the constraints and indexes also renamed to match the naming convention. I could not find any easy way to do this and decided to script it.

If you know of a tool that can do this, let me know in the comments and stop reading any further 😄.

Since I have been playing around with F# for a while, I chose to write it in that. SQL Server Management Objects (SMO) provides a collection of objects to manage SQL Server programmatically, and it can be used from F# as well. Using the #I and #r directives, the SMO library path and DLLs can be referenced.

#I @"C:\Program Files\Microsoft SQL Server\140\SDK\Assemblies\";;
#I @"C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies";;
#r "Microsoft.SqlServer.Smo.dll";;
#r "Microsoft.SqlServer.ConnectionInfo.dll";;
#r "Microsoft.SqlServer.Management.Sdk.Sfc.dll";;

The SMO object model is a hierarchy of objects with the Server as the top-level object. Given a server name, we can start navigating through the entire structure and interact with the related objects. Below is how we can narrow down to the table that we want to rename.

let generateRenameScripts (serverName:string) (databaseName:string) (oldTableName:string) newTableName = 
    let server = Server(serverName)
    let db = server.Databases.[databaseName]
    let oldTable = db.Tables |> Seq.cast |> Seq.tryFind (fun (t:Table) -> t.Name = oldTableName)

SMO does allow generating scripts programmatically, very similar to how SSMS allows you to right-click on a table and generate the relevant scripts. The ScriptingOptions class allows passing in various parameters determining the scripts generated. Below is how I create the drop and create scripts.

let generateScripts scriptingOptions (table:Table) =
    let indexes = table.Indexes |> Seq.cast |> Seq.collect (fun (index:Index) -> (index.Script scriptingOptions |> Seq.cast<string>))
    let fks = table.ForeignKeys |> Seq.cast |> Seq.collect (fun (fk:ForeignKey) -> fk.Script scriptingOptions |> Seq.cast<string>)
    let all = Seq.concat [fks; indexes]
    Seq.toList all

let generateDropScripts (table:Table) =
    let scriptingOptions = ScriptingOptions(ScriptDrops = true, DriAll = true, DriAllKeys = true, DriPrimaryKey = true, SchemaQualify = false)
    generateScripts scriptingOptions table

let generateCreateScripts (table:Table) =
    let scriptingOptions = ScriptingOptions(DriAll = true, DriAllKeys = true, DriPrimaryKey = true, SchemaQualify = false)
    generateScripts scriptingOptions table

For the create scripts, I do a string replace of the old table name with the new table name. The full gist is available here.
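As a rough sketch, that replace is just a map over the generated statements; renameInScripts is a hypothetical helper for illustration and not part of the gist.

let renameInScripts (oldTableName: string) (newTableName: string) (scripts: string list) =
    // Naive string replace over every generated statement; assumes the old
    // table name does not appear as a substring of unrelated identifiers
    scripts |> List.map (fun s -> s.Replace(oldTableName, newTableName))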

Below is what the script generated for renaming the above table from ‘Product’ to ‘ProductRenamed’. This output can be further optimized by passing the appropriate parameters to the ScriptingOptions class.

let script = generateRenameScripts "(localdb)\\MSSQLLocalDB" "Warehouse" "Product" "ProductRenamed"
File.WriteAllLines (@"C:\Work\Scripts\test.sql", script) |> ignore
ALTER TABLE [Product] DROP CONSTRAINT [FK_Product_Category]
DROP INDEX [IX_Product_CategoryId] ON [Product]
ALTER TABLE [Product] DROP CONSTRAINT [UQ__Product__3214EC065B6D1E82]
EXEC sp_rename 'Product', 'ProductRenamed'
ALTER TABLE [ProductRenamed]  WITH CHECK ADD  CONSTRAINT [FK_ProductRenamed_Category] FOREIGN KEY([CategoryId])
REFERENCES [Category] ([Id])
ALTER TABLE [ProductRenamed] CHECK CONSTRAINT [FK_ProductRenamed_Category]
ALTER TABLE [ProductRenamed]  WITH CHECK ADD  CONSTRAINT [FK_ProductRenamed_Vendor] FOREIGN KEY([VendorId])
REFERENCES [Vendor] ([Id])
ALTER TABLE [ProductRenamed] CHECK CONSTRAINT [FK_ProductRenamed_Vendor]
CREATE NONCLUSTERED INDEX [IX_ProductRenamed_CategoryId] ON [ProductRenamed]
(
  [CategoryId] ASC
)
ALTER TABLE [ProductRenamed] ADD UNIQUE NONCLUSTERED
(
  [Id] ASC
)
ALTER TABLE [ProductRenamed] ADD CONSTRAINT [PK_ProductRenamed] PRIMARY KEY CLUSTERED
(
  [Id] ASC
)

One thing that is missing at the moment is renaming foreign key references from other tables in the database to this newly renamed table. The F# code is possibly not at its best, and I still have a lot of influence from C#. If you have any suggestions for making it better, sound off in the comments.

Hope this helps and makes it easy to rename a table and update all associated naming conventions.

I am always on the lookout for productivity hacks and new systems to improve the way I work. Given my nature of work is in front of a computer, my productivity tool choices have always been digital. However, at the start of this year, I came across an interesting book Digital Minimalism. After reading the book, I have changed my de-facto relationship with many apps and the phone in general. I discovered the book while I was skimming through a blog post related to Bullet Journaling. Given that Bullet Journaling also favors disconnecting from devices, I decided to give it a try.

Though it does require a journal, Bullet Journal is a methodology. It’s best described as a mindfulness practice disguised as a productivity system. It’s designed to help you organize your what while you remain mindful of your why. The goal of the Bullet Journal is to help its practitioners (Bullet Journalists) live intentional lives, ones that are both productive and meaningful.

If you are new to Bullet Journaling check out the quick introduction to Bullet Journaling and also the Recommended reading at the end of this post.

I have been bullet journaling since February 2019, but it is only recently that I have started finding it more useful and have built a workflow around it. When I started with this method, I was using it more as a reactive journaling tool, mostly capturing things after they happened. Even though I did plan a few things ahead and put them in the journal, I did not have any formalized practice around this. Of late, I came across the YouTube channel by Matt Ragland, who shares a lot of tips around bullet journaling and the various strategies he uses to be more productive. I took some inspiration from it and from a few other sources and tweaked my existing process.

Journaling Supplies

If you search for Bullet Journaling on the internet, you will likely see a lot of artistic pages and people talking about various stationery items (pens, stickers, markers, washi tapes, etc.). But if you are like me (not very artistic) and don’t care much about that, ignore it all. All you need is a pen and paper (preferably a notebook). Even though there is an official bullet journal notebook, the method works with any notebook/journal that you have. I use a dot-grid journal as it gives proper alignment and guides to draw lines and align lists. Below is what I use for bullet journaling.

Most of the non-artistic bullet journalists go by the name ‘Minimalist Bullet Journal’ - useful if you are looking for inspiration on the internet.

Brain Dump

If you are starting fresh and trying to organize yourself, one of the first activities I suggest is a Brain Dump. At any point in time, there are a lot of things on my mind and things I keep committing to, for myself and others. It is not possible to keep up with everything that I wish to do. So the very first thing to do is to dump everything out onto paper and then decide what needs attention. The Incompletion Trigger List assists in getting everything out of your mind onto paper. It’s a good idea to block out some of your time to perform this exercise and give it all the attention it needs. At times it helps to Slow Down to Go Fast.

Once you have everything dumped onto paper, try to choose two to four top priority items that you want to focus on. I prefer to choose things with different themes. E.g., my main three items (in no particular order) are:

  • Fitness (Running, Cycling and Swimming)
  • Blog and Youtube Channel
  • Learning

Monthly Planning

Before the start of every month (usually on the last day of the previous month), I plan for the upcoming month and capture some of the key things I want to achieve. I try to align things with the top priority items to help move them forward. E.g., I might list the actual blog posts that I am going to publish that month and also the YouTube videos. That said, I am not always that organized to come up with blog posts upfront - like the month below (in the image). As for fitness, I plan for any events happening that I want to attend, etc. Usually, I follow a running plan that spans a couple of months, so I don’t have to work out those details every month.

I add in a habit tracker for some of the key things I want to track for that month. E.g., I try to stretch every day after waking up, drink lots of water, make sure I journal, etc. I track these every day using the habit tracker (more on it in the Daily Planning section). For reminders and things that need to happen on specific days, I put them in Todoist as well, and copy any of those for the current month over into the month’s highlights.

Weekly Planning

Every Sunday evening, I spend 15-20 minutes to map out the upcoming week. It involves reflecting on the week that has passed, carrying over unfinished items, capturing things for the coming week, etc. I capture important meetings at work and any required follow-ups. I create my running workout plans in Garmin Calendar and sync that back with my watch.

Daily Planning

Every day before bed, I spend 5-10 minutes updating my bullet journal and planning for the next day. Looking at the week’s plan, I work out the tasks for the following day and try to break them up into smaller items to move each item on the list forward. I update the habit tracker and cross off items that I have achieved for the day. The idea with the habit tracker is to try not to break the chain. Even if I miss one day, I try to make sure I get it done the following day. If any new items have come into Todoist, I copy them over for the day.

Sticking to these three planning techniques has helped me be more focused and get more things done. The very act of writing things down and rewriting them (when not done) forces me to be mindful of what I am committing to and doing. I still struggle on the execution side at times and procrastinate. There are days I don’t get anything done. Daily planning helps me reflect on those days and be more mindful the following day. It evens out the ups and downs over a period and helps me get things done. Committing to paper gives a stronger sense of needing to complete the task and stick with the plan.

I hope this helps you on your journey!

In an earlier post, we saw how to enable Role-Based Access for .Net Core Web applications. We used hardcoded AD Group Ids in the application, as below.

"AdGroups": [
    "GroupName": "Admin",
    "GroupId": "119f6fb5-a325-47f9-9889-ae6979e9e120"
    "GroupName": "Employee",
    "GroupId": "02618532-b2c0-4e58-a32e-e715ddf07f63"

To avoid hardcoding the ids in the application config, we can use the Graph API to query the AD groups at runtime. The GraphServiceClient from the Microsoft.Graph NuGet package can be used to connect to the Graph API. In this post, we will see how to use the API client to retrieve the AD groups. We will see two authentication mechanisms for the Graph API - one using client credentials and the other using Managed Service Identity.

Using Client Credentials

To authenticate using Client Id and secret, we need to create an AD App in the Azure portal. Add a new client secret under the ‘Certificates & Secrets’ tab. To access the Graph API, make sure to add permissions under the ‘API permissions’ tab, as shown below. I have added the required permissions to read the AD Groups.

private static async Task<GraphServiceClient> GetGraphApiClient()
{
    var clientId = "AD APP ID";
    var secret = "AD APP Secret";
    var domain = "<your AD domain>"; // e.g. the tenant's .onmicrosoft.com domain

    var credentials = new ClientCredential(clientId, secret);
    var authContext = new AuthenticationContext($"https://login.microsoftonline.com/{domain}/");
    var token = await authContext.AcquireTokenAsync("https://graph.microsoft.com", credentials);
    var accessToken = token.AccessToken;

    var graphServiceClient = new GraphServiceClient(
        new DelegateAuthenticationProvider((requestMessage) =>
        {
            requestMessage.Headers
                .Authorization = new AuthenticationHeaderValue("bearer", accessToken);

            return Task.CompletedTask;
        }));

    return graphServiceClient;
}

Using Managed Service Identity

With the client credentials approach, we have to manage the AD app and the associated secrets. To avoid this, we can use Managed Service Identity (MSI), and the Azure infrastructure will do this for us automatically. To use MSI, turn on Identity for the Azure Web App from the Azure Portal.

For the MSI service principal to access the Microsoft Graph API, we need to assign the appropriate permissions. This is not possible through the Azure Portal, and we need to use a PowerShell script. As before, we only need permission to read the Azure AD groups. ‘00000003-0000-0000-c000-000000000000’ is the well-known Application ID of the Microsoft Graph API. Using that, we can filter out the App Role for reading the AD groups.

$graph = Get-AzureADServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"
$groupReadPermission = $graph.AppRoles `
    | where Value -Like "Group.Read.All" `
    | Select-Object -First 1

# Use the Object Id as shown in the image above
$msi = Get-AzureADServicePrincipal -ObjectId <WEB APP MSI Identity>

New-AzureADServiceAppRoleAssignment `
    -Id $groupReadPermission.Id `
    -ObjectId $msi.ObjectId `
    -PrincipalId $msi.ObjectId `
    -ResourceId $graph.ObjectId

As we have seen in previous instances with MSI (here and here), we use the AzureServiceTokenProvider to authenticate and get the token. The ClientId and secret are no longer required. The AzureServiceTokenProvider class tries to get a token using Managed Service Identity, Visual Studio, Azure CLI, and Integrated Windows Authentication. In our case, when deployed to Azure, the code uses MSI to get the token.

private static async Task<GraphServiceClient> GetGraphApiClient()
{
    var azureServiceTokenProvider = new AzureServiceTokenProvider();
    string accessToken = await azureServiceTokenProvider
        .GetAccessTokenAsync("https://graph.microsoft.com/");

    var graphServiceClient = new GraphServiceClient(
        new DelegateAuthenticationProvider((requestMessage) =>
        {
            requestMessage.Headers
                .Authorization = new AuthenticationHeaderValue("bearer", accessToken);

            return Task.CompletedTask;
        }));

    return graphServiceClient;
}

Getting AD Groups Using Graph Client

With the GraphServiceClient, we can get the groups from Azure AD as below. These groups can then be used to configure the Authorization policy.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthorization(options =>
    {
        var adGroups = GetAdGroups();

        foreach (var adGroup in adGroups)
            options.AddPolicy(
                adGroup.GroupName,
                policy =>
                    policy.AddRequirements(new IsMemberOfGroupRequirement(adGroup.GroupName, adGroup.GroupId)));
    });

    services.AddSingleton<IAuthorizationHandler, IsMemberOfGroupHandler>();
}

private static List<AdGroupConfig> GetAdGroups()
{
    var client = GetGraphApiClient().Result;
    var allAdGroups = new List<AdGroupConfig>();

    var groups = client.Groups.Request().GetAsync().Result;

    while (groups.Count > 0)
    {
        allAdGroups.AddRange(
            groups.Select(a =>
                new AdGroupConfig() { GroupId = a.Id, GroupName = a.DisplayName }));

        // Keep paging until no more pages are available
        if (groups.NextPageRequest != null)
            groups = groups.NextPageRequest.GetAsync().Result;
        else
            break;
    }

    return allAdGroups;
}

The AD groups no longer need to be hardcoded in the application. Also, with Managed Service Identity, we do not need any additional AD app/credentials to be managed as part of the application.

Hope it helps!

Brisbane To Gold Coast Cycle Challenge, B2GC 2019

A short recap of the day - Very well organized event!

Brisbane To Gold Coast Cycle Challenge (B2GC) is a fun ride event - but for me, it was a race, a race against myself. The longest I had ridden before this was 50km. For B2GC, I had decided not to take the rest stops and head straight for it.

The event was very well organized. A big thank you to all the organizers and volunteers.

Things I carried on my bike (Check out my post here for specifics of my bike and accessories I use)

  • 4 Energy gels
  • 1 Oats bar - did not use
  • 1 bottle water + 1 bottle Electrolyte
  • Mini Toolkit + puncture kit + 1 spare tube + mini pump - Had all of these in my Aero Wedge
  • Wallet (Id Card + some cash)
  • Mobile Phone

Things in the bag (handed over at the cloakroom at start site)

  • Thongs/sandals - Wore it after the ride
  • A pair of clothes - did not use

B2GC 2019 was on September 15, 2019 - a warm and sunny day, perfect for riding. I woke up at 4 am and got ready. I had put my bike in the car the previous night and packed all the things, making sure I had everything I needed. I said goodbye to my wife and started for UQ at 4:30. It was a 30-minute drive, and I reached there at around 5 am. Lots of cyclists were already there, getting ready for the early start with the red category. I planned to start with the blue category (< 25km/hr).

I parked my car at P10 - UQ Centre car park, as instructed on the website, which was quite close to the start point. I am not familiar with putting on the bike wheels, and it took me around 10 minutes. Once all set, I set off to the start point. I dropped off my bag at the cloakroom - they took a 2 dollar donation to get the bag over to the finish site at Gold Coast. I hit the loo after a short 10-minute queue and started the ride at around 6, along with all the other blue bib holders, from the Eleanor Schonell Bridge – aka the Green Bridge.

The markings along the way were quite clear; there is no way someone would lose their way. Throughout the ride, I had fellow riders in front of and behind me. At most major intersections, there were volunteers and police officers stopping the traffic and making way for the cyclists. I have not ridden much with cycling shoes and cleats and did face some difficulty clipping in and out at signals. For my work commute, I use regular running shoes, so I don’t have to carry an extra pair for work. In total, I got around 4-5 red stop signals (welcome for me, as I got to stretch my legs). At one signal, I did fumble a bit on my clipped-in side as I came to a stop. Lucky for me, I didn’t fall over.

When I crossed the finish line at Gold Coast, my Garmin showed a little over 91 km. I rode to the back of the finish line through a bikeway to make it a full 100. Though I started with the blue category, I finished as an Orange (with an average of 26.3 km/hr) and am pretty happy with my finish time. After the race, there was food and coffee (paid) and lots of stalls. I ate a sausage and rested for a while. I had pre-booked my bus tickets, as part of my B2GC registration, to get back to Brisbane. The bikes were taken in a separate truck, to be dropped off at the same place where you board the bus. I took the 10:30 bus back and arrived in Brisbane around 11:45. The bicycle truck arrived about 10 minutes later, and I was back at my car by 12:15. Loaded the bike back in the car and headed home!

Had a great ride and kudos to everyone who participated in the event!

You can find all the photos I took (and the official ones of me) here

It’s not often that you want to debug into applications running on a Virtual Machine, but that's not to say it is never required. Recently at one of my clients, I had to debug into an application running on an Azure Virtual Machine. I wanted to debug an application with Azure AD group logic, and my laptop was not domain-joined. This called for remote debugging, with the application running on a domain-joined virtual machine.

In this post, we will look at how to set up the Virtual Machine and Visual Studio for remote debugging. If you are interested in watching this in action, check out the video in the link above.

Setting up Virtual Machine

To be able to remote debug into a Virtual Machine, it needs to be running the Remote Debugging Monitor (msvsmon). Assuming you have Visual Studio installed on your local machine (which is why you are trying to debug in the first place), you can find msvsmon under the folder C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\Common7\IDE\Remote Debugger. The path might be slightly different based on the version and edition of Visual Studio (Professional, Community, Enterprise). The above path is for Visual Studio (VS) 2019 Professional. Copy the Remote Debugger folder over to the virtual machine. Alternatively, you can install the Remote Debugger tools from the internet. Make sure you download the version that matches the Visual Studio version you will be using to debug.

Run the msvsmon.exe as an Administrator, and it will prompt you to set up some firewall rules on the Virtual Machine as below.

Once confirmed, it adds the following firewall rules for the x64 and x86 versions of the application.

The Remote Debugging Monitor listens on a port (each Visual Studio version has a different default) - 4024 for VS 2019. This can be changed under Options if needed. For this example, I have turned off Authentication, as shown below.

Azure Portal Settings

In the Azure Portal, under the Networking tab for the Virtual Machine, add an inbound port rule to open the port that msvsmon is listening on - 4024 in this case.

Debugging From Visual Studio

Now that everything is set up, we are good to debug the application from our local machine. Make sure the application to be debugged is running on the Virtual Machine. Go to Debug -> Attach To Process in Visual Studio. Choose Remote (no authentication) under the Connection type and enter the IP address or the FQDN of the VM along with the port number. Shortly, you should see the list of applications running on the machine, and you can choose the appropriate app to debug.

Visual Studio is now debugging the application running on the Virtual Machine. Hope this helps and happy debugging!

Cycling To Work: What's in My Bag

The best thing about not being able to remote work is THE COMMUTE

Commuting on a cycle to work is one thing that I look forward to every day. The initial inertia to get started is very high - but trust me, do it once, and you are very likely to continue doing it. I try to commute to work on my Giant TCR three days a week. The other two days, I go for an early morning run, so I take the bus to work. I use an Osprey Radial 34 as my commute bag. The bag in itself is excellent and comfortably fits everything I need for the day. It also provides exceptional riding comfort with the padded mesh back. I would recommend the 34L version if you cannot leave things at work (like a pair of jeans, etc.)

Below are the things that are typically there in my bag

  1. Osprey Radial 34
  2. Topeak Race Rocket HP Pump
  3. Topeak Mini 20 Pro Tool
  4. JetBlack Svelto Photochomatic Sunglasses Red/Black
  5. Spare tubes, puncture kit
  6. Work Laptop - Surface Pro or Lenovo X1 Extreme
  7. Logitech MX Master
  8. Bose QC 35 II - Black
  9. Bullet Journal
  10. Sakura Pigma Micron Pen
  11. Ikea Lunch Box
  12. Snacks (Bar, Banana etc.)

On the bike

  1. Bike - Giant TCR Advanced Pro 1 2016
  2. Lezyne Macro 1100/ Strip PRO 300 Light Set - Black
  3. Topeak Aero Wedge QuickClip Saddle Bag Medium Size
  4. Kryptonite Keeper 785 Integrated Chain

At work (depending on the client I am with), I usually have access to a bike park area and also showers. I use the Kryptonite Keeper 785 Integrated Chain to secure it to the racks. On weekends, when I go for longer rides, I use the Topeak Aero Wedge QuickClip Saddle Bag to carry the mini tool, puncture kit, extra tubes, and keys, etc. Since that is a quick release, it is easy to remove/put on as required.

If you are planning to start commuting to work on your bike, don’t let the list of things put you off from starting. I started with just the bike and added all these things one by one over the last year. Do you commute to work on your bike? What gear do you use and find helpful? If you don't yet, try to do it at least once a week, and soon you will enjoy it like me!
