Don't Let Entity Framework Fool Your Constructors!

Any state that an object can be in must be representable through the class constructor.

In the previous post, Back To Basics: Constructors and Enforcing Invariants, we saw the importance of well-defined constructors and how they help us maintain invariants. Most projects I work on use Entity Framework for database interactions. We usually have the same Domain Model mapped to the database structure using the Fluent API Configuration. The configurations help keep the Domain Model agnostic of the database dependencies and mappings. The fluent configurations “act as a DTO class without needing to define one explicitly” and keep the Domain classes ‘clean’.

One thing I notice across projects is that class constructors often do not represent all the states an object can take: it is not possible to create every state an object can be in using the constructors alone. Let us look at the Quote class below for an example. It has the following invariants enforced by the constructor.

  • A Quote must have a Customer
  • A newly created quote always starts in Draft Status
public class Quote
{
    public Guid Id { get; private set; }
    public QuoteStatus Status { get; private set; }
    public Customer Customer { get; }
    public MobilePhone Phone { get; private set; }
    private readonly List<Accessory> _accessories = new List<Accessory>();
    public IReadOnlyCollection<Accessory> Accessories => _accessories;

    private Quote() { }

    public Quote(Guid id, Customer customer)
    {
        Id = id;
        Customer = customer ?? throw new ArgumentNullException(nameof(customer));
        Phone = MobilePhone.Empty;
        Status = QuoteStatus.Draft;
    }

    public void UpdatePhone(MobilePhone phone)
    {
        Phone = phone ?? throw new ArgumentNullException(nameof(phone));
    }

    public void OpenQuote()
    {
        if (Phone == MobilePhone.Empty)
            throw new DomainException("Cannot set quote to open with empty phone");

        Status = QuoteStatus.Open;
    }
}

You can add a phone to a Quote, open the Quote, and many more such actions (you get the idea). The Quote class is like an ‘Aggregate Root’ that enforces the constraints on its properties through the methods and constructors it exposes. You can see that a Quote cannot be in an Open state without an associated phone.

Below is a sample usage of this class to create and open a Quote within a console application. Each new context mimics a new Controller endpoint in the case of a web application. The below works as expected and allows us to create a Quote and add a phone to it.

var quoteId = Guid.NewGuid();
using (var context = new QuoteContext(optionsBuilder.Options))
{ // Create a New Draft Quote
    var customer = new Customer("Rahul", "", "123 Fake Address");
    var quote = new Quote(quoteId, customer);
    context.Quotes.Add(quote);
    context.SaveChanges();
}

using (var context = new QuoteContext(optionsBuilder.Options))
{ // Add Phone to the Quote
    var quote = context.Quotes.First(a => a.Id == quoteId);
    var phone = new MobilePhone("IPhone", "X", 1000.00m);
    quote.UpdatePhone(phone);
    context.SaveChanges();
}

EF Core and its Reflection Magic

With a phone attached to the Quote, we can now Open the Quote as shown below.

using (var context = new QuoteContext(optionsBuilder.Options))
{ // Open quote
    var quote = context.Quotes.First(a => a.Id == quoteId);
    quote.OpenQuote();
    context.SaveChanges();
}

We do not have a constructor that creates a Draft Quote with a Phone.

How is EF loading the data?

EF Core allows properties with private setters and populates them when retrieving data. It does this through the magic of reflection, setting the properties on the objects even though the setters are private. We don’t notice this unless we write tests or have other use cases in code that create a Quote in different states.
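The Fluent API configuration mentioned earlier is what wires this up. The sketch below is an assumption based on the property names in this post (QuoteConfiguration is a name I made up, not the project’s actual class); the SetPropertyAccessMode call is what tells EF Core to hydrate the private _accessories backing field directly:

```csharp
// Sketch of a Fluent API configuration for Quote; the class name and
// details are assumptions based on this post, not the real project.
public class QuoteConfiguration : IEntityTypeConfiguration<Quote>
{
    public void Configure(EntityTypeBuilder<Quote> builder)
    {
        builder.HasKey(a => a.Id);

        // Bind the Accessories navigation to its private backing field
        // so EF Core can populate it despite the read-only property.
        builder.Metadata
            .FindNavigation(nameof(Quote.Accessories))
            .SetPropertyAccessMode(PropertyAccessMode.Field);
    }
}
```

With the access mode set to Field, EF Core writes to _accessories directly when materializing a Quote, bypassing the read-only Accessories property.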

No offense to the magic, don’t get me wrong here. I like it and use it a lot, and it makes it easier to write code to retrieve data from the database. However, let’s not allow the magic to drive our constructors and class definitions. Let’s write a test to make this more evident.

EF Magic Makes Tests Fragile

Below is a test to verify that calling OpenQuote sets the Quote to Open status. However, note that we have to call the UpdatePhone method to update the Quote object before calling OpenQuote.

[Fact]
public void OpenQuoteSetsStatusToOpen()
{
    var customer = new Customer(
        "Rahul", "", "123 Fake Address");
    var quote = new Quote(Guid.NewGuid(), customer);
    var phone = new MobilePhone("IPhone", "X", 1000.00m);

    quote.UpdatePhone(phone);
    quote.OpenQuote();

    Assert.Equal(QuoteStatus.Open, quote.Status);
}

It seems a trivial problem in isolation. However, any time we need a Quote object with anything more than an id and a customer, we need to call these methods. To write tests, we need to invoke a series of methods to put the object into the correct state. This increases code coupling and makes the tests fragile.

To fix this, we need to add more constructors to the Quote class that allow us to create a Quote in the desired state. The constructor that we had before now calls on to the new one with the same parameters. The new constructor enforces that any non-draft Quote needs an associated phone.

Any state that an object can be in must be representable through the class constructor.

public Quote(Guid id, Customer customer)
    : this(id, customer, MobilePhone.Empty, QuoteStatus.Draft) { }

public Quote(Guid id, Customer customer, MobilePhone phone, QuoteStatus status)
{
    Id = id;
    Customer = customer ?? throw new ArgumentNullException(nameof(customer));
    Phone = phone ?? throw new ArgumentNullException(nameof(phone));

    if (status != QuoteStatus.Draft && phone == MobilePhone.Empty)
        throw new DomainException($"Cannot set quote to {status} with empty phone");

    Status = status;
}
We can now rewrite the test to use the new constructor. We don’t need to call the UpdatePhone method here to get the Quote in the correct state.

[Fact]
public void OpenQuoteSetsStatusToOpen()
{
    var customer = new Customer(
        "Rahul", "", "123 Fake Address");
    var phone = new MobilePhone("IPhone", "X", 1000.00m);
    var quote = new Quote(Guid.NewGuid(), customer, phone, QuoteStatus.Draft);

    quote.OpenQuote();

    Assert.Equal(QuoteStatus.Open, quote.Status);
}

The constructor will have to be modified when you start adding accessories to the Quote, but I leave that to you. Constructors are the gateway to creating objects. Make sure they are not dependent on other frameworks that you use in the project. Make sure all states are representable through the constructor, not only by invoking methods.

Does your constructor allow representing all states?

DefaultAzureCredential: Unifying How We Get Azure AD Token

Azure Identity library provides Azure Active Directory token authentication support across the Azure SDK

In the past, Azure had different ways to authenticate with the various resources. The Azure SDKs are bringing this all under one roof, providing a more unified approach for developers connecting to resources on Azure.

In this post, we will look into the DefaultAzureCredential class that is part of the Azure Identity library. It is the new and unified way to connect and retrieve tokens from Azure Active Directory and can be used along with resources that need them. We will look at how to authenticate and interact with Azure Key Vault and Microsoft Graph API in this post.

The Azure Identity library provides Azure Active Directory token authentication support across the Azure SDK. It provides a set of TokenCredential implementations which can be used to construct Azure SDK clients which support AAD token authentication.

The DefaultAzureCredential is very similar to the AzureServiceTokenProvider class from the Microsoft.Azure.Services.AppAuthentication library. The DefaultAzureCredential gets the token based on the environment the application is running in. The following credential types, if enabled, will be tried in order: EnvironmentCredential, ManagedIdentityCredential, SharedTokenCacheCredential, InteractiveBrowserCredential. Some of these options are not enabled by default and need to be explicitly enabled.
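For example, the interactive browser credential is excluded from the chain by default. Assuming the Azure.Identity package, it can be opted in through the credential options; this is a sketch, typically only useful for local tools rather than server workloads:

```csharp
// Sketch: opt in to the interactive browser credential, which is
// excluded from the default chain (assumes the Azure.Identity package).
var options = new DefaultAzureCredentialOptions
{
    ExcludeInteractiveBrowserCredential = false
};
var credential = new DefaultAzureCredential(options);
```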

Azure Key Vault

When connecting with Key Vault, make sure to provide the identity (Service Principal or Managed Identity) with relevant Access Policies in the Key Vault. It can be added via the Azure portal (or cli, PowerShell, etc.).

Using the Azure Key Vault client library for .NET v4 you can access and retrieve Key Vault Secret as below. The DefaultAzureCredential inherits from TokenCredential, which the SecretClient expects.

var secretClient = new SecretClient(
    new Uri(""),
    new DefaultAzureCredential());
var secret = await secretClient.GetSecretAsync("<SecretName>");

If you are using the version 3 of the KeyVaultClient to connect to Key Vault, you can use the below snippet to connect and retrieve a secret from the Key Vault.

var credential = new DefaultAzureCredential();
var keyVaultClient = new KeyVaultClient(async (authority, resource, scope) =>
{
    var token = credential.GetToken(
        new Azure.Core.TokenRequestContext(
            new[] { "" }));
    return token.Token;
});

var secret = await keyVaultClient
    .GetSecretAsync("<Secret Identifier>");

Microsoft Graph API

When connecting with the Graph API, we can get a token to authenticate using the same DefaultAzureCredential. I am not sure if there is a GraphServiceClient variant that takes in the TokenCredential (similar to SecretClient). Do drop in the comments if you are aware of one.

var credential = new DefaultAzureCredential();
var token = credential.GetToken(
    new Azure.Core.TokenRequestContext(
        new[] { "" }));

var accessToken = token.Token;
var graphServiceClient = new GraphServiceClient(
    new DelegateAuthenticationProvider((requestMessage) =>
    {
        requestMessage.Headers
            .Authorization = new AuthenticationHeaderValue("bearer", accessToken);

        return Task.CompletedTask;
    }));

Local Development

In your local environment, DefaultAzureCredential uses the shared token credential from the IDE. In the case of Visual Studio, you can configure the account to use under Options -> Azure Service Authentication. By default, the accounts that you use to log in to Visual Studio appear here. If you have multiple accounts configured, set the SharedTokenCacheUsername property to specify the account to use.

In my case, I have my hotmail address (associated with my Azure subscription) and my work address added to Visual Studio. However, when using my hotmail account to access KeyVault or Graph API, I ran into this issue. Explicitly adding in a new user to my Azure AD and using that from Visual Studio resolved the issue.

The SharedTokenCacheUsername can be passed into the DefaultAzureCredential using DefaultAzureCredentialOptions, as shown below. I am using the #if DEBUG directive to enable this only in debug builds.

var azureCredentialOptions = new DefaultAzureCredentialOptions();
#if DEBUG
azureCredentialOptions.SharedTokenCacheUsername = "<AD User Name>";
#endif

var credential = new DefaultAzureCredential(azureCredentialOptions);

To make the above source-control friendly, you can move the ‘<AD User Name>’ to your configuration file so that each team member can set it as required. The same can also be achieved by setting the ‘AZURE__USERNAME’ environment variable. Once set, make sure to restart Visual Studio for it to take effect. With AZURE__USERNAME set, you no longer need to explicitly set the SharedTokenCacheUsername.

Set AZURE__USERNAME to avoid having to write the extra code to set the SharedTokenCacheUsername

Alternatively, you can also set Environment variables and specify the ‘AZURE_CLIENT_ID’, ‘AZURE_TENANT_ID’, and ‘AZURE_CLIENT_SECRET’ which will be automatically picked up and used to authenticate. Check out this post on how to get the ClientId/Secret to authenticate.
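For example, on macOS/Linux the variables can be exported before launching the application; the values below are placeholders for your own app registration’s details:

```shell
# Placeholder values - substitute your AD app registration details.
export AZURE_CLIENT_ID="<client-id>"
export AZURE_TENANT_ID="<tenant-id>"
export AZURE_CLIENT_SECRET="<client-secret>"
```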

Hope this helps you get started with the new set of Azure SDKs!

Thanks to Jon Gallant for reaching out and encouraging me to check out this new set of SDKs.

Back To Basics: Constructors and Enforcing Invariants

In C# or any class-based object-oriented language, a constructor is used to create an object. The constructor is responsible for initializing the object’s data members and establishing the class invariants. A constructor fails and throws an exception when the class invariants are not met. An invariant is an assertion that always holds true.

The invariant must hold to be true after the constructor is finished and at the entry and exit of all public member functions.

Simple Invariants

E.g., if a constructor takes in a string and checks that it is not null before assigning it to its property, the invariant is that the string Value can never be null.

public class Name
{
    public string Value { get; set; }

    public Name(string name)
    {
        Value = name ?? throw new ArgumentNullException(nameof(name));
    }
}

However, in the above case, one can easily break the invariant on an instance by setting the Value property to null after creating the object.

var name = new Name("Rahul");
name.Value = null;

To stop this, make the setter on the Value property private. The Value property can then no longer be set directly on the object instance.

public class Name
{
    public string Value { get; private set; }
}

However, we can still add a new method on the Name class, as below, which breaks the invariant.

public class Name
{
    public void PrintName()
    {
        Value = null;
    }
}

To enforce the invariant, we can either make sure that we never do something like above inside of a class or mark the property as read-only. By marking it read-only, we ensure that it is set only inside the constructor and nowhere else (even within the class). Remember that an invariant must hold to be true after the constructor is finished and at the entry and exit of all public member functions.

public class Name
{
    public readonly string Value;
}

By marking it as read-only, we enforce that Value can no longer be set to null, even within the class. The only place you can set it is the constructor. Name is now immutable: its value cannot be changed after the object is created.

Multi-Property Invariants

The NotNull constraint is something we see often and are used to writing. However, it is not the only kind of constraint. A good example is the DateTime class, which enforces that any date created is valid.

var leapYear = new DateTime(2020, 02, 29);

// Throws exception
// Year, Month, and Day parameters describe an un-representable DateTime
var invalid_NotLeapYear = new DateTime(2019, 02, 29);
var invalid =  new DateTime(2020, 02, 30);

Similar checks are possible for custom classes that we write. E.g., let’s take a DateRange class. In addition to StartDate and EndDate not being null, we have an additional invariant here that the end date cannot be less than the start date.

public class DateRange
{
    public readonly DateTime StartDate;
    public readonly DateTime EndDate;

    public DateRange(DateTime startDate, DateTime endDate)
    {
        // Ignoring null checks
        if (endDate < startDate)
            throw new ArgumentException("End Date cannot be less than Start Date");

        this.StartDate = startDate;
        this.EndDate = endDate;
    }
}

Business Invariants

Taking this to the next level, we can add business invariants as well. Let’s take the example of a quote for mobile phones and associated accessories. A quote can be in many different states (Draft, Open, Accepted, and Expired). There are a few rules associated with the creation of a Quote.

  • A quote must have an associated Customer
  • An Open quote must have an associated Phone
  • Accessories are optional

Adding business constraints to constructors makes illegal states unrepresentable. If you are on .NET Core 3.0, turn on Nullable Reference Types, and you can get some of these advantages at compile time as well.

public class Quote
{
    public int Id { get; private set; }
    public QuoteStatus Status { get; private set; }
    public Customer Customer { get; private set; }
    public MobilePhone Phone { get; private set; }
    private readonly List<Accessories> _accessories = new List<Accessories>();
    public IReadOnlyCollection<Accessories> Accessories => _accessories;

    private Quote() { }

    public Quote(int id, Customer customer)
        : this(id, customer, MobilePhone.Empty, QuoteStatus.Draft) { }

    public Quote(int id, Customer customer, MobilePhone phone, QuoteStatus status)
        : this(id, customer, phone, status, new List<Accessories>()) { }

    public Quote(
        int id, Customer customer, MobilePhone phone,
        QuoteStatus status, List<Accessories> accessories)
    {
        Id = id;
        Customer = customer ?? throw new ArgumentNullException(nameof(customer));

        if (status != QuoteStatus.Draft && phone == null)
            throw new DomainException($"Mobile Phone cannot be null when status is {status}");

        Phone = phone;
        Status = status;
        _accessories = accessories ?? new List<Accessories>();
    }
}

Let’s look at the different constraints that the Quote class enforces

  • The private default constructor makes sure an empty Quote cannot be created.
  • Quote with id and customer parameter forces the Quote to be in Draft status.
  • All the other constructors use the constructor with all the properties. A Quote instance cannot exist without a customer. When a quote is not in the draft state, it must have an associated MobilePhone.

With these checks in place, we can be sure that some of the business constraints are enforced, and the objects cannot be created in an invalid state. We don’t have to make any more assumptions about the Quote object in our code. We can be sure about some of the above-enforced constraints every time we use the Quote class. It helps make the code contracts stronger.

Constructors are the entry points to the instances. Make them fail fast if the state is illegal. It helps remove a lot of unnecessary defensive checks in other areas of our code.

Hope this helps!

Generating PDF: .Net Core and Azure Web Application

Using NReco library to generate PDF files on Azure Web App running .Net Core.

Generating a PDF is one of those features that come along in a while and gets me thinking.

How do I do this now?

Previously, I had written about dynamically generating a large PDF from website contents. The PDF library I used back then had the limitation of not being able to run on an Azure Web App because of Azure sandbox restrictions.

In this post, we will look at how to generate a PDF in an Azure Web App running .NET Core, what the limitations are, and some tips and tricks to help with the development. I am using the NReco HTML-to-PDF Generator for .NET, which is a C# wrapper over WkHtmlToPdf.

To use NReco HTML-To-PDF Generator with .Net Core, you need a license.

Generating the PDF

Generate HTML

To generate the PDF, we first need to generate HTML. RazorLight is a template engine based on Razor for .NET Core, available as a NuGet package. I am using the latest available pre-release version, 2.0.0-beta4. RazorLight supports templates from Files, EmbeddedResources, Strings, a Database, or a Custom Source. The source is configured when setting up the RazorLightEngine used in the application. For .NET Core, we can inject an instance of IRazorLightEngine for use in the application. The ContentRootPath comes from the IWebHostEnvironment that can be injected into the Startup class.

var engine = new RazorLightEngineBuilder()
    .UseFileSystemProject($"{ContentRootPath}/PdfTemplates")
    .UseMemoryCachingProvider()
    .Build();

An instance of the engine is used to generate HTML from a Razor view. By using the UseFileSystemProject function above, RazorLight picks up the templates from the provided file path. I have all the template files under a folder named ‘PdfTemplates’. Make sure to set the template files (*.cshtml) and any associated resource files (CSS and images) to ‘Copy to Output Directory’. RazorLight registers the templates in the specified path and makes them available against a template key. The template key format differs based on the source; e.g., when using the file system, the template key is the relative path to the template file from the root path.

The HtmlGenerationService below takes in a data object and generates the HTML string using the RazorLightEngine. By convention, it expects a template file (*.cshtml) within a folder. E.g., For data type ‘Quote’, it expects a template with key ‘Quote/Quote.cshtml’.

public class HtmlGenerationService : IHtmlGenerationService
{
    private readonly IRazorLightEngine _razorLightEngine;

    public HtmlGenerationService(IRazorLightEngine razorLightEngine)
    {
        _razorLightEngine = razorLightEngine;
    }

    public async Task<string> Generate<T>(T data)
    {
        var template = typeof(T).Name;
        return await _razorLightEngine.CompileRenderAsync($"{template}/{template}.cshtml", data);
    }
}

I got the following error - InvalidOperationException: Cannot find reference assembly ‘Microsoft.AspNetCore.Antiforgery.dll’ file for package Microsoft.AspNetCore.Antiforgery - and had to set PreserveCompilationReferences and PreserveCompilationContext in the csproj, as mentioned here. Make sure to check the FAQs if you are facing any error using the library.

Generate PDF

With the HTML generated, we can use the HtmlToPdfConverter, the NReco wrapper class, to convert it to PDF format. The library is free for .NET but needs a paid license for .NET Core. It is available as a NuGet package and works fine with .NET Core 3.1 as well.

The wkhtmltopdf binaries must be deployed for your target platform(s) (Windows, Linux, or OS X) with your .NET Core app.

With .NET Core, the wkhtmltopdf executable does not get bundled as part of the NuGet package, because the executable differs based on the hosting OS. Make sure to include the executable and set it to be copied to the bin folder. By default, the converter looks for the executable (wkhtmltopdf.exe) under a folder named wkhtmltopdf. The path is configurable.
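One way to get the binaries copied is a csproj entry; the sketch below assumes the binaries sit in a ‘wkhtmltopdf’ folder at the project root:

```xml
<!-- Sketch: copy the wkhtmltopdf binaries to the output folder.
     Assumes a 'wkhtmltopdf' folder in the project root. -->
<ItemGroup>
  <None Update="wkhtmltopdf\**" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>
```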

public class PdfGeneratorService : IPdfGeneratorService
{
    public PdfGeneratorService(
        IHtmlGenerationService htmlGenerationService, NRecoConfig config) {...}

    public async Task<byte[]> Generate<T>(T data)
    {
        var htmlContent = await HtmlGenerationService.Generate(data);
        return ToPdf(htmlContent);
    }

    private byte[] ToPdf(string htmlContent)
    {
        var htmlToPdf = new HtmlToPdfConverter();
        if (RuntimeInformation.IsOSPlatform(OSPlatform.OSX))
            htmlToPdf.WkHtmlToPdfExeName = "wkhtmltopdf";

        return htmlToPdf.GeneratePdf(htmlContent);
    }
}

Calling the GeneratePdf function with the HTML string returns the PDF as a byte array, which can be returned as a File or saved for later reference.

public async Task<IActionResult> Get(string id)
{
    ...
    var result = await PdfGenerationService.Generate(model);
    return File(result, "application/pdf", $"Quote - {model.Number}.pdf");
}


Before using any PDF generation library, make sure you read the associated docs and FAQs, as most of them have one limitation or another. It’s about finding the library that fits the purpose and budget.

Must run on a dedicated VM-backed plan : NReco works fine in an Azure Web App as long as it is on a dedicated VM-based plan (Basic, Standard, Premium). If you are running on a Free or Shared plan, NReco will not work.

Custom fonts are not supported : On Azure Web App, there is a limitation on fonts. Custom fonts are ignored, and system-installed fonts are used.

Not all browser features are available : wkhtmltopdf uses the Qt WebKit rendering engine to render the HTML into PDF. You will need to play around and see what works and what doesn’t. I have seen this mostly affecting CSS (Flexbox and CSS Grid support was unavailable in the version I was using).

Development Tips & Tricks

Here are a few things that helped speed up the development of the Razor file.

Render Razor View While Development

Once I had the PDF generation pipeline set up, the challenge was to get the formatting with real-time feedback. I didn’t want to download the PDF and verify every time I made a change.

To see the output of the razor template as and when you make changes, return the HTML content as ContentResult back on the API endpoint. When calling this from a browser, it will automatically render it.

public async Task<IActionResult> Get(string id, [FromQuery]bool? html)
{   ...
    if (html.GetValueOrDefault())
    {
        var htmlResult = await HtmlGenerationService.Generate(model);
        return new ContentResult() {
            Content = htmlResult,
            ContentType = "text/html",
            StatusCode = 200 };
    }

    var result = await PdfGenerationService.Generate(model);
    return File(result, "application/pdf", $"Quote - {model.Number}.pdf");
}

With caching turned off (comment out UseMemoryCachingProvider) and files as the source (UseFileSystemProject), RazorLight loads the template file fresh every time it renders. Any time you make a change to the Razor view, refresh the API endpoint for the updated HTML.
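A development-time engine setup might then look like the below (a sketch; ContentRootPath and the PdfTemplates folder are assumed from the earlier configuration):

```csharp
// Development-time sketch: no caching provider, so template edits
// are picked up on the next render.
var engine = new RazorLightEngineBuilder()
    .UseFileSystemProject($"{ContentRootPath}/PdfTemplates")
    // .UseMemoryCachingProvider() // re-enable for production
    .Build();
```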

Please make sure the final PDF looks as expected, since the local browser might render the HTML differently from what wkhtmltopdf produces.

Styles in Sass

I did not want to miss out on writing Sass for CSS, but also did not want to set up any automated scripts/pipeline just for the templates. Web Compiler, a Visual Studio extension, makes it easy to compile Sass to CSS. Once you have the extension installed, right-click on the SCSS file to compile it to CSS. It adds a config file to the solution, and from then on automatically compiles when the SCSS file changes.

The next time you come across a feature to generate PDFs, I hope this helps you get started. The source code for this is available here. Set the NRecoConfig in the appsettings.json to start creating PDFs. I hope this helps!

Recently I was working at a client, and we had to take online payment for the service they provide. There were two options to pay - either in part or in full. When paying in full, the payment included a total amount and a refundable amount. When paying in partial, there is a minimum amount required to be paid at the time of purchase, the remaining amount with a surcharge (optional based on the card used for payment) amount, and a refundable amount.

Initially, I started modeling the data using one interface as below - PaymentOption. It has a type to indicate partial or full payment. The properties totalRental, payNow, and refundableBond are applicable in both scenarios. However, payNow and totalRental are the same in the case of ‘full’ payment. The properties balance, balanceSurcharge, and payLater are only applicable when the payment option is of type ‘partial’.

export interface PaymentOption {
  type: "partial" | "full";
  totalRental: number;
  payNow: number;
  refundableBond: number;
  balance?: number;
  balanceSurcharge?: number;
  payLater?: number;
}

You can see the problem - I had to explain a lot, and it is still confusing. It needs a lot of back and forth to understand how these data fit together.

It is not expressive enough!

I am sure when I go back to this code a couple of weeks from now, it will be hard to understand. I bet this will be the same, if not harder, for anyone new who has to look into the same code and maintain it.

I decided to split the payment options into two different definitions. Sum Types (also called Discriminated Unions or Algebraic Data Types) are a great way to represent data that can take multiple shapes. We have a ‘PaymentOption’ type which can either be a ‘FullPaymentOption’ or a ‘PartPaymentOption’. We can now group together the properties that apply to each scenario.

You can combine singleton types, union types, type guards, and type aliases to build an advanced pattern called discriminated unions, also known as tagged unions or algebraic data types Or Sum Types.

export type PaymentOption = FullPaymentOption | PartPaymentOption;

export interface FullPaymentOption {
  type: "full";
  totalRental: number;
  payNow: number;
  refundableBond: number;
}

export interface PartPaymentOption {
  type: "partial";
  totalRental: number;
  payNow: number;
  refundableBond: number;
  balance: number;
  balanceSurcharge?: number;
  payLater: number;
}

The data is now expressive and indicates what fields apply to the relevant payment option. Since ‘balanceSurcharge’ is optional based on the card type used for payment, I have it as optional on ‘PartPaymentOption’ type.

When using the PaymentOption Sum Type, we can conditionally check which option it represents using the ‘type’ property, also referred to as the ‘discriminant’. Once we narrow it to a specific type, TypeScript is intelligent enough to restrict us to the properties that type defines. E.g., if it is a ‘full’ payment, balance (or any of the other properties that only apply to a ‘partial’ payment option) cannot be accessed. This makes consuming Sum Types extremely useful and less error-prone.
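The narrowing can be sketched as below; amountOutstanding is a hypothetical helper of my own (not from the original code), and the type definitions are repeated so the snippet stands alone:

```typescript
type PaymentOption = FullPaymentOption | PartPaymentOption;

interface FullPaymentOption {
  type: "full";
  totalRental: number;
  payNow: number;
  refundableBond: number;
}

interface PartPaymentOption {
  type: "partial";
  totalRental: number;
  payNow: number;
  refundableBond: number;
  balance: number;
  balanceSurcharge?: number;
  payLater: number;
}

// Switching on the 'type' discriminant narrows the union: inside each
// case, only that member's properties are accessible.
function amountOutstanding(option: PaymentOption): number {
  switch (option.type) {
    case "full":
      // option is FullPaymentOption here; option.balance would not compile.
      return 0;
    case "partial":
      // option is PartPaymentOption here.
      return option.balance + (option.balanceSurcharge ?? 0);
  }
}
```

Trying to access option.balance outside the ‘partial’ branch is a compile-time error, which is exactly the safety the discriminant buys us.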

No longer do we need to keep track of when data will and will not be populated. Having conditional properties on an interface or a class creates confusion. It makes it harder to deal with the data and the various combinations it can take. Try to avoid this as much as possible. I hope this gives you an idea to take away and implement for your own problem.

Create React App is the de facto starting point for most of the websites I work on these days. In this post, we will see how to set up a build/deploy pipeline for a Create React App application in Azure DevOps. We will be using the YML format for the pipeline, which makes it possible to have the build definition as part of the source code.

Build Pipeline

In the DevOps portal, start by creating a new Build pipeline and choose the ‘Node.js with React’ template. By default, it comes with the ‘Install Node.js’ step that installs the required node version and an ‘npm script’ step to execute any custom scripts. The output of the build step must be an artifact to deploy in the Release step. To support this, we need to add two more steps to the YML file, giving four steps in total.

  • Install Node.js
  • Build UI (Npm script)
  • Create Archive
  • Publish Artifacts
# Node.js with React
# Build a Node.js project that uses React.
# Add steps that analyze code, save build artifacts, deploy, and more:

trigger:
  - master

variables:
  uiSource: "src/ui"
  uiBuild: "$(uiSource)/build"

pool:
  vmImage: "ubuntu-latest"

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: "10.x"
    displayName: "Install Node.js"

  - script: |
      pushd $(uiSource)
      npm install
      npm run build
    displayName: "Build UI"

  - task: ArchiveFiles@2
    displayName: Archive
    inputs:
      rootFolderOrFile: "$(uiBuild)"
      includeRootFolder: false
      archiveType: "zip"
      archiveFile: "$(Build.ArtifactStagingDirectory)/ui-$(Build.BuildId).zip"
      replaceExistingArchive: true

  - task: PublishBuildArtifacts@1
    displayName: Publish Artifacts
    inputs:
      PathtoPublish: "$(Build.ArtifactStagingDirectory)"
      ArtifactName: "drop"
      publishLocation: "Container"

The above pipeline generates a zip artifact of the contents of the ‘build’ folder.

Release Pipeline

To release to Azure Web App, create a new release pipeline and add the Azure Web App Task. Link with the appropriate Azure subscription and select the web application to deploy.
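If you prefer to keep the release in YAML as well, a roughly equivalent deployment step is sketched below; the service connection and app names are placeholders:

```yaml
# Sketch of a YAML deployment step; names are placeholders.
- task: AzureWebApp@1
  inputs:
    azureSubscription: "<service-connection-name>"
    appName: "<web-app-name>"
    package: "$(Pipeline.Workspace)/drop/*.zip"
```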

Frontend Routing

When using React, you will likely use a routing library like react-router. In this case, the routing library must handle the URLs, not the server hosting the files. The server will fail to serve those routes, as it probably has nothing to interpret them. When hosting on IIS (also for Azure Web App on Windows), add a web.config file to the public folder. This file will automatically get packaged at the root of the artifact. The file has a URL Rewrite config that takes any route, points it to the root of the website, and has the Index.html file served. E.g., if a user hits a route like ‘Customer/1223’ directly in the browser, IIS will rewrite it to the root and have the default file (Index.html) served back to the user. React router will then handle the route and serve the appropriate React component for ‘Customer/1223’.

If APIs are part of the same host, they need to be excluded from the URL Rewrite. The config below excludes '/api' from being redirected, as well as any URL that matches a physical file on the server (CSS, JS, images, etc.).

<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="React Routes" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <add input="{REQUEST_URI}" pattern="^/(api)" negate="true" />
          </conditions>
          <action type="Rewrite" url="/" />
        </rule>
      </rules>
    </rewrite>
    <staticContent>
      <mimeMap fileExtension=".otf" mimeType="font/otf" />
    </staticContent>
  </system.webServer>
</configuration>

Environment/Stage Variables

When deploying to multiple environments (Test, Staging, Production), I like to have the configs as part of Azure DevOps Variable Groups. It keeps all the configuration for the application in one place and makes it easier to manage. These variables are replaced in the build artifact at the time of release, based on the environment it is getting released to. One way to handle this is to have a script tag in the 'Index.html' file as below.

  <script>
    window.BookingConfig = {
      searchUrl: "",
      bookingUrl: "",
      isDevelopment: true,
      imageServer: ""
    };
  </script>
  <meta charset="utf-8" />
  <link rel="icon" href="%PUBLIC_URL%/favicon.ico" />

This file has the configuration for local development, allowing any developer on the team to pull down the source code and start running the application. Also add an 'Index.release.html' file, which is the same as Index.html but with placeholders for the variables. In the example, isDevelopment is an optional config that is false by default, hence not specified in the Index.release.html file.

  <script>
    window.BookingConfig = {
      searchUrl: "#{SearchUrl}#",
      bookingUrl: "#{BookingUrl}#",
      imageServer: "#{ImageServer}#"
    };
  </script>
  <meta charset="utf-8" />
  <link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
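The default handling for optional settings can be sketched in plain JavaScript. This is a hypothetical helper, not code from the post: it merges whatever the script tag set on window.BookingConfig over a defaults object, so an optional setting like isDevelopment falls back to false when the release file omits it.

```javascript
// Hypothetical helper: read the runtime config that Index.html sets on
// window.BookingConfig, falling back to defaults for optional settings.
const defaults = { isDevelopment: false };

function getBookingConfig(win) {
  // Values provided by the script tag win over the defaults.
  return Object.assign({}, defaults, win.BookingConfig || {});
}

// A release Index.html only sets the replaced tokens, not isDevelopment.
const releaseWindow = {
  BookingConfig: {
    searchUrl: "https://example.test/search",
    bookingUrl: "https://example.test/booking",
    imageServer: "https://example.test/images",
  },
};

console.log(getBookingConfig(releaseWindow).isDevelopment); // false
```

The URLs above are placeholders; in a real app, the application code would read from the actual `window` object.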

In the build step, add a command-line task that deletes Index.html and renames Index.release.html to Index.html.

This step must be before the npm step that builds the application to have the correct Index.html file packaged as part of the artifact.

- task: CmdLine@2
  inputs:
    script: |
      echo Replace Index.html with Index.release.html
      rm Index.html
      mv Index.release.html Index.html
    workingDirectory: "$(uiSource)/public"

In the release step, add the Replace Tokens task to replace tokens in the new Index.html file (Index.release.html in source control). Specify the appropriate root directory and the Target files to have variables replaced. By default, the Token prefix and suffix are ‘#{’ and ‘}#’. Add a new variable group for each environment/stage (Test, Staging, and Prod). Add the variables to the group and associate it to the appropriate stage in the release pipeline. The task will replace the configs from the Variable Groups at the time of release.
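The token replacement itself is simple to picture. The sketch below is illustrative only (not the Replace Tokens task's actual implementation): it substitutes #{Name}# placeholders from a dictionary of stage variables and leaves unknown tokens untouched.

```javascript
// Illustrative sketch of #{Token}# replacement; in the pipeline the real
// work is done by the Replace Tokens task using the stage's variable group.
function replaceTokens(content, variables) {
  return content.replace(/#\{(\w+)\}#/g, (match, name) =>
    name in variables ? variables[name] : match // keep unknown tokens as-is
  );
}

// Hypothetical stage variables for a Test environment.
const stageVariables = {
  SearchUrl: "https://test.example/search",
  BookingUrl: "https://test.example/booking",
};

const line = 'searchUrl: "#{SearchUrl}#", imageServer: "#{ImageServer}#"';
console.log(replaceTokens(line, stageVariables));
// → searchUrl: "https://test.example/search", imageServer: "#{ImageServer}#"
```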

I hope this helps you to set up a Build/Release pipeline for your create-react-app!

2019: What Went Well, What Didn't and Goals

A short recap of the year that is gone by and looking forward!

Another year has gone by so fast, and it is again time to do a year review.


2019 has been a fantastic year with lots of new learning, blogging, reading, running, and cycling. I started creating content for YouTube. Travel and Swimming did not go as planned. Looking forward to 2020!

What went well

Running and Cycling

I did lots of running and cycling again this year. I wanted to do a couple of events (including a full marathon); however, that did not happen. The only event I did was the Brisbane to Gold Coast 100k cycling event, which was my first 100k ride and a great experience. I got a Tacx Neo towards the end of the year and am looking to start using it for structured training in the coming years. For running, following the FIRST Running Plan has helped me a lot to improve my average pace.

'Strava Summary'

Blogging and YouTube

I was a lot more consistent with the number of posts this year. Except for October (while I was on vacation), I had a minimum of 2 posts every month. I am also trying to complement the blog posts with YouTube videos and be more regular at it. I have published 10+ videos since August and trying to build up my channel and content. Subscribe here if you are interested to know every time a new video is published.

Learning and Reading

I stumbled across Exercism during the year and found the FSharp track interesting. I completed the core exercises on the track. CSS is something I have always struggled with. Towards the end of this year, I took the Advanced CSS and Sass course on Udemy. I am halfway through the course and finding it extremely useful. It helped me heaps to get going with CSS and SASS. I did want to build the Key Vault Explorer; however, that never took off.

As for reading, I had set a goal of 10 books for this year and am happy to have finished 11. I highly recommend Digital Minimalism and The Bullet Journal Method. Here is what I have been experimenting with after reading Digital Minimalism and how I have been using the Bullet Journal methodology.

What didn’t go well

I am happy this year to have set out with the right goals and to have met most of them. Here are some things that could have been better.

  • Swimming It's been almost two years of being on and off with swimming. I have come a long way; however, I am still not at the point where I am comfortable saying I swim well.

  • Travel Two trips back to India took most of my vacation time. We also visited Bundaberg and Rockhampton - places within Queensland. However, we did not make any other international trips.

Goals for 2020

  • Reading Read 12 books - Bumping up two books from the last year’s challenge. I want to add more variety to the books.

  • Blogging and Youtube 3 posts and 3 videos every month. Try and build up a niche/specialization. It is something I have wanted to do for a long time but never happened.

  • Tri Sports Complete a Marathon. Focus more on swimming. Complete a training program on my Tacx Neo with Trainer Road.

  • Learning Learn about Containers and SAFE stack.

I started with Bullet Journaling in 2019, and it has been helping me a lot with planning and organizing myself. I plan to use the same in 2020 and have got a new Journal, all ready to go.

Wishing you all a Happy and Prosperous New Year!

While playing around with the Windows Terminal, I had set up Aliasing to enable alias for commonly used commands.

E.g., typing s implies git status.

I wanted to create new command aliases from the command line itself, instead of opening up the script file and modifying it manually. So I created a PowerShell function for it.

$aliasFilePath = "<Alias file path>"

function New-CommandAlias {
    param(
        [Parameter(Mandatory = $true)] [string] $CommandName,
        [Parameter(Mandatory = $true)] [string] $Command,
        [Parameter(Mandatory = $true)] [string] $CommandAlias
    )

    $functionFormat = "function $commandName { & $command `$args }
New-Alias -Name $commandAlias -Value $commandName -Force -Option AllScope"

    $newLine = [Environment]::NewLine
    Add-Content -Path $aliasFilePath -Value "$newLine$functionFormat"
}

. $aliasFilePath

The script does override an existing alias with the same name. Use the 'Get-Alias' cmdlet to find existing aliases.

The above script writes a new function and maps it to the alias command using the existing New-Alias cmdlet.

function Get-GitStatus { & git status -sb $args }
New-Alias -Name s -Value Get-GitStatus -Force -Option AllScope

Add this to your PowerShell profile file (run notepad $PROFILE) as we did for theming when we set up the Windows Terminal. In the above script, I write to the '$aliasFilePath' and load all the aliases from that file using the dot sourcing operator.

Below are a few sample usages

New-CommandAlias -CommandName "Get-GitStatus" -Command "git status -sb" -CommandAlias "s"
New-CommandAlias -CommandName "Move-ToWorkFolder" -Command "cd C:\Work\" -CommandAlias "mwf"

The full gist is available here. I have tried adding only a couple of commands, and it did work fine. If you find any issues, please drop a comment.

For a long time, I have been using the Cmder as my command line. It was mostly for the ability to copy-paste, open multiple tabs, and the ability to add aliases (shortcut command). I was never particularly interested in other customizations of the command line. However, one of these recent tweets made me explore the new Windows Terminal.

Windows Terminal is a new, modern, feature-rich, productive terminal application for command-line users. It includes many of the features most frequently requested by the Windows command-line community, including support for tabs, rich text, globalization, configurability, theming & styling, and more.

You can install using the command line itself or get it from the Windows Store. I prefer the Windows Store version as it gets automatically updated.


Pressing the Windows Key + # (the position of the app on the taskbar) works as a toggle. If the app is open and selected, it will minimize; if not, it will be brought to the front and selected. If the app is not running, it will start.

In my case, Windows Key + 1 launches Terminal, Windows Key + 2 launches Chrome, Windows Key + 3 launches Visual Studio and so on.


To theme the terminal, you need to install two PowerShell modules.

Install-Module posh-git -Scope CurrentUser
Install-Module oh-my-posh -Scope CurrentUser

To load these modules by default on launching PowerShell, update the PowerShell profile. For this run ‘notepad $PROFILE’ from a PowerShell command line. Add the below lines to the end of the file and save. You can choose an existing theme or even make a custom one. You can further customize this as you want. Here is a great example to get started. I use the Paradox theme currently.

Import-Module posh-git
Import-Module oh-my-posh
Set-Theme Paradox

Restart the prompt, and if you see squares or weird-looking characters, you likely need some updated fonts. Head over to Nerd Fonts, where you can browse for them.

Nerd Fonts patches developer targeted fonts with a high number of glyphs (icons), giving all those cool icons in the prompt.

To make Windows Terminal use the new font, update the settings. Click the button with a down arrow right next to the tabs, or use the Ctrl + , shortcut. It opens the profiles.json settings file, where you can update the font face per profile.

"fontFace": "UbuntuMono NF",


I use the command line mostly for interacting with git repositories and like having shorter versions of commonly used commands, like git status, git commit, etc. My previous command line, Cmder, had a feature to set aliases. Similarly, in PowerShell, we can create a function to wrap the git command and then use the New-Alias cmdlet to create an alias. You can find a good list to start with here and modify them as you need. I have the list of aliases in a separate file and load it in the profile as below. Having it in Dropbox allows me to sync it across multiple devices and have the same aliases everywhere.

Use the dot sourcing operator to run the script in the current scope, making everything in the specified file available in the current scope.

. C:\Users\rahul\Dropbox\poweshell_alias.ps1

The alias does override any existing alias with the same name, so make sure that you use aliases that don’t conflict with anything that you already use. Here is the powershell_alias file that I use.

I no longer use Cmder and enjoy using the new Terminal. I have just scratched the surface of the terminal here, and there are heaps more that you can format, customize, add other shells, etc.

Enjoy the new Terminal!


At work, we usually use DbUp to deploy changes to SQL Server. We follow certain naming conventions when creating table constraints and indexes. Here is an example:

create table Product
(
  Id uniqueidentifier not null unique,
  CategoryId uniqueidentifier not null,
  VendorId uniqueidentifier not null,

  constraint PK_Product primary key clustered (Id),
  constraint FK_Product_Category foreign key (CategoryId) references Category (Id),
  constraint FK_Product_Vendor foreign key (VendorId) references Vendor (Id)
);

create index IX_Product_CategoryId on Product (CategoryId);
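The conventions above are mechanical enough to express in code. This is an illustrative sketch only (not something from the post): small helpers that derive the constraint and index names used in the SQL above.

```javascript
// Illustrative naming-convention helpers matching the SQL above:
// PK_<Table>, FK_<Table>_<ReferencedTable>, IX_<Table>_<Column>.
const pkName = (table) => `PK_${table}`;
const fkName = (table, refTable) => `FK_${table}_${refTable}`;
const ixName = (table, column) => `IX_${table}_${column}`;

console.log(pkName("Product")); // PK_Product
console.log(fkName("Product", "Category")); // FK_Product_Category
console.log(ixName("Product", "CategoryId")); // IX_Product_CategoryId
```

Because the table name is embedded in every constraint and index name, renaming a table means renaming all of them too, which is what the rest of this post is about.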

I had to rename a table as part of a new feature. I could have just renamed the table and moved on, but I wanted all the constraints and indexes renamed as well, to match the naming convention. I could not find an easy way to do this and decided to script it.

If you know of a tool that can do this, let me know in the comments and stop reading any further 😄.

Since I have been playing around with F# for a while, I chose to write it in F#. The SQL Server Management Objects (SMO) library provides a collection of objects to manage SQL Server programmatically, and it can be used from F# as well. Using the #I and #r directives, the SMO library path and DLLs can be referenced.

#I @"C:\Program Files\Microsoft SQL Server\140\SDK\Assemblies\";;
#I @"C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies";;
#r "Microsoft.SqlServer.Smo.dll";;
#r "Microsoft.SqlServer.ConnectionInfo.dll";;
#r "Microsoft.SqlServer.Management.Sdk.Sfc.dll";;

The SMO object model is a hierarchy of objects with the Server as the top-level object. Given a server name, we can start navigating through the entire structure and interact with the related objects. Below is how we can narrow down to the table that we want to rename.

let generateRenameScripts (serverName:string) (databaseName:string) (oldTableName:string) newTableName = 
    let server = Server(serverName)
    let db = server.Databases.[databaseName]
    let oldTable = db.Tables |> Seq.cast |> Seq.tryFind (fun (t:Table) -> t.Name = oldTableName)

SMO also allows generating scripts programmatically, very similar to how SSMS lets you right-click on a table and generate the relevant scripts. The ScriptingOptions class allows passing in various parameters that determine the scripts generated. Below is how I create the drop and create scripts.

let generateScripts scriptingOptions (table:Table) =
    let indexes = table.Indexes |> Seq.cast |> Seq.collect (fun (index:Index) -> index.Script scriptingOptions |> Seq.cast<string>)
    let fks = table.ForeignKeys |> Seq.cast |> Seq.collect (fun (fk:ForeignKey) -> fk.Script scriptingOptions |> Seq.cast<string>)
    let all = Seq.concat [fks; indexes]
    Seq.toList all

let generateDropScripts (table:Table) =
    let scriptingOptions = ScriptingOptions(ScriptDrops = true, DriAll = true, DriAllKeys = true, DriPrimaryKey = true, SchemaQualify = false)
    generateScripts scriptingOptions table

let generateCreateScripts (table:Table) =
    let scriptingOptions = ScriptingOptions(DriAll = true, DriAllKeys = true, DriPrimaryKey = true, SchemaQualify = false)
    generateScripts scriptingOptions table

For the create scripts, I do a string replace of the old table name with the new table name. The full gist is available here.
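The idea behind that replace step can be sketched outside F# too. The JavaScript below is an illustration, not the post's actual code: replacing every occurrence of the old table name in a generated create script also fixes the convention-based constraint and index names.

```javascript
// Sketch: string-replace the old table name in a generated create script so
// names like FK_Product_Category become FK_ProductRenamed_Category.
function renameInScript(script, oldName, newName) {
  // split/join replaces every occurrence, not just the first.
  return script.split(oldName).join(newName);
}

const createScript =
  "ALTER TABLE [Product] ADD CONSTRAINT [FK_Product_Category] FOREIGN KEY([CategoryId])";
console.log(renameInScript(createScript, "Product", "ProductRenamed"));
// → ALTER TABLE [ProductRenamed] ADD CONSTRAINT [FK_ProductRenamed_Category] FOREIGN KEY([CategoryId])
```

Note that a plain string replace is naive: it would also rewrite any other identifier that happens to contain the old table name.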

Below is what the script generated for renaming the above table from 'Product' to 'ProductRenamed'. This output can be further optimized by passing the appropriate parameters to the ScriptingOptions class.

let script = generateRenameScripts "(localdb)\\MSSQLLocalDB" "Warehouse" "Product" "ProductRenamed"
File.WriteAllLines (@"C:\Work\Scripts\test.sql", script) |> ignore

ALTER TABLE [Product] DROP CONSTRAINT [FK_Product_Category]
DROP INDEX [IX_Product_CategoryId] ON [Product]
ALTER TABLE [Product] DROP CONSTRAINT [UQ__Product__3214EC065B6D1E82]
EXEC sp_rename 'Product', 'ProductRenamed'
ALTER TABLE [ProductRenamed]  WITH CHECK ADD  CONSTRAINT [FK_ProductRenamed_Category] FOREIGN KEY([CategoryId])
REFERENCES [Category] ([Id])
ALTER TABLE [ProductRenamed] CHECK CONSTRAINT [FK_ProductRenamed_Category]
ALTER TABLE [ProductRenamed]  WITH CHECK ADD  CONSTRAINT [FK_ProductRenamed_Vendor] FOREIGN KEY([VendorId])
REFERENCES [Vendor] ([Id])
ALTER TABLE [ProductRenamed] CHECK CONSTRAINT [FK_ProductRenamed_Vendor]
CREATE NONCLUSTERED INDEX [IX_ProductRenamed_CategoryId] ON [ProductRenamed]
(
  [CategoryId] ASC
)
ALTER TABLE [ProductRenamed] ADD UNIQUE NONCLUSTERED
(
  [Id] ASC
)

One thing that is missing at the moment is renaming foreign key references from other tables in the database to this newly renamed table. The F# code is possibly not at its best, and I still have a lot of influence from C#. If you have any suggestions for making it better, sound off in the comments.

Hope this helps and makes it easy to rename a table and update all associated naming conventions.

← Previous Posts