Using Key Vault via a Variable Group does not snapshot configuration values from the vault. When a deployment is triggered, the latest value from the Key Vault is used.

A couple of weeks back, I accidentally deleted an Azure DevOps Variable Group at work, and it sucked out half of my day. I was lucky that it was the development environment. Since I had access to the environment, I remoted to the corresponding machines (using the Kudu console) and retrieved the values from the appropriate config files. Even with all access in place, it still took me a couple of hours to fix it. This is not a great place to be.

In this post, let us see how we can link secrets from Key Vault to DevOps Variable Groups and how it helps protect us from accidental deletes and similar mistakes. Check out my video on Getting started with Key Vault and other related articles if you are new to Key Vault.

Azure Key Vault enables safeguarding the cryptographic keys and secrets used by applications. It increases security and control over keys and passwords.

Azure DevOps supports linking Secrets from an Azure Key Vault to a Variable Group. When creating a new variable group, toggle on the ‘Link secrets from an Azure Key Vault as variables’ option. You can then select the Azure subscription and the associated Key Vault to retrieve the secrets from. Clicking the Authorize button next to the Key Vault sets the required permissions on the Key Vault for the Azure Service Connection (which is what connects your DevOps account to the Azure subscription).

Azure DevOps Variable Groups and Azure Key Vault

The Add button pops up a dialog as shown below. It allows you to select the Secrets that need to be available as part of the Variable Group.

Azure DevOps Variable Groups link secrets from Vault

Create an Azure Key Vault for each environment to manage the Secrets for that environment. As per the current pricing, creating a Key Vault does not have any cost associated with it. Cost is based on operations against the Key Vault - around USD $0.03 per 10,000 transactions (at the time of writing).

Version History for Variable Changes

Key Vault supports versioning and creates a new version of an object (key/secret/certificate) each time it is updated. This helps keep track of previous values. You can set expiry/activation dates on a secret if applicable (see the sketch below). Further, by having Expiry Notification for Azure Key Vault set up, you can stay on top of rotating your secrets/certificates on time.

The Variable Group refers only to the Secret names in the Key Vault. The secret names are the same as those in the application configuration file. Every time a release is deployed, it reads the latest value of the Secret from the associated Key Vault and uses that for the deployment. This is different from defining the variables directly as part of the group, where the variables are snapshotted at the time of release.
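Here is a minimal sketch of setting a secret with expiry/activation dates from code; it assumes the Azure.Security.KeyVault.Secrets NuGet package, and the vault URL and secret name are placeholders.

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

var client = new SecretClient(
    new Uri("https://my-dev-vault.vault.azure.net/"), // placeholder vault URL
    new DefaultAzureCredential());

// Each SetSecretAsync call creates a new version of the secret.
var secret = new KeyVaultSecret("DbConnectionString", "<secret value>");
secret.Properties.NotBefore = DateTimeOffset.UtcNow;               // activation date
secret.Properties.ExpiresOn = DateTimeOffset.UtcNow.AddMonths(6);  // expiry date
await client.SetSecretAsync(secret);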

For every deployment, the latest version of the Secret is read from the associated Key Vault.

Make sure this behavior is acceptable for your deployment scenario before moving to Key Vault via Variable Groups. However, there is a different plugin that you can use to achieve variable snapshotting even with Key Vault, which I will cover in a separate post.

Handling Accidental Deletes

In case anyone accidentally deletes a variable group in Azure DevOps, it is as simple as cloning the variable group of one of your other environments and renaming it to be the Dev variable group. Mostly, it’s the same set of variables across all environments. The actual secret values are not required anymore, as those are managed in Key Vault.

For argument’s sake, what if I accidentally delete the Key Vault itself?

The good news is that Key Vault does have a recovery option. Assuming you create the Key Vault with the recovery options set (which you obviously will now), using the EnableSoftDelete parameter from PowerShell, you can recover from any delete action on the vault/key/secret.
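As a sketch, recovering a soft-deleted secret from code looks like the below; it again assumes the Azure.Security.KeyVault.Secrets package, with a placeholder vault URL and secret name. (Recovering a deleted vault itself happens at the management layer, via PowerShell or the CLI.)

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

var client = new SecretClient(
    new Uri("https://my-dev-vault.vault.azure.net/"),
    new DefaultAzureCredential());

// Only possible when the vault has soft delete enabled.
RecoverDeletedSecretOperation operation =
    await client.StartRecoverDeletedSecretAsync("DbConnectionString");
await operation.WaitForCompletionAsync();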

Hope this helps save half a day (or even more) of someone (maybe me again) who accidentally deletes a variable group!

When interacting with third-party services over the network, it is good to have a fault handling and resilience strategy in place. Some libraries have built-in capabilities while for others you might have to roll your own.

Below is a piece of code that I came across at one of my clients. It talks to the Auth0 API and gets all users. However, the Auth0 API has a rate limiting policy. Depending on the API endpoint, the rate limits differ; they also vary based on time and other factors. The HTTP response headers contain information on the status of rate limits for the endpoint and are dynamic. The code below defines a constant delay between subsequent API calls so as not to exceed the rate limits.

public async Task<User[]> GetAllUsers()
{
    var results = new List<User>();
    IPagedList<User> pagedList = null;

    do
    {
        // The page index is derived from the previous page's start offset;
        // the very first request passes page 0.
        pagedList = await auth0Client.Users.GetAllAsync(
            connection: connectionString,
            page: pagedList?.Paging.Start / PageSize + 1 ?? 0,
            perPage: PageSize,
            includeTotals: true,
            sort: "email:1");

        results.AddRange(pagedList);

        // Fixed delay between calls to stay under the rate limit.
        await Task.Delay(THROTTLE_TIME_IN_MS);
    } while (pagedList.Paging.Start + pagedList.Paging.Length < pagedList.Paging.Total);

    return results.ToArray();
}

The delay seems valid when looked at in isolation, but when different code flows/apps make calls to the Auth0 API at the same time, this is no longer the case. The logs showed exactly that: there were many Auth0 errors with a 429 status code, indicating ‘Too Many Requests’ and ‘Global Rate Limit has been reached.’

An obvious fix here might be to re-architect the whole solution to remove this dependency on Auth0 and not make this many API calls in the first place. But let’s accept the solution we have in place and see how we can make it more resilient and fault tolerant. Rate limit exceptions are an excellent example of transient errors. A transient error is a temporary error that is likely to disappear soon. By definition, it is safe for a client to ignore a transient error and retry the failed operation.

Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner.

With Polly added in as a NuGet package, we can define a policy to retry the API request up to 3 times when it fails with a 429 status code response. An exponential backoff, based on the attempt count and the hardcoded THROTTLE_TIME_IN_MS, is added before each subsequent attempt.

private Polly.Retry.AsyncRetryPolicy GetAuth0RetryPolicy()
{
    // Retry up to 3 times on 429 (Too Many Requests), waiting exponentially
    // longer between attempts based on the attempt count.
    return Policy
        .Handle<ApiException>(a => a.StatusCode == (HttpStatusCode)429)
        .WaitAndRetryAsync(
            3, attempt => TimeSpan.FromMilliseconds(
                THROTTLE_TIME_IN_MS * Math.Pow(2, attempt)));
}

The original call to Auth0, updated to use the policy, is as below.

...
pagedList = await auth0RetryPolicy.ExecuteAsync(() => auth0Client.Users.GetAllAsync(
    connection: connectionString,
    page: pagedList?.Paging.Start / PageSize + 1 ?? 0,
    perPage: PageSize,
    includeTotals: true,
    sort: "email:1"));

The calls to Auth0 are now more resilient and fault tolerant. The request is automatically retried if the failure reason is ‘Too Many Requests (429)’. It is an easy win with just a few lines of code. This is just an example of fault handling and retry with the Auth0 API; the same technique can be used with any other service you depend on. You just need to define your own policy and modify the calls to use it, as the sketch below shows.
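For example, here is a minimal sketch of the same backoff-and-retry pattern around a plain HttpClient; the endpoint URL is a placeholder, and it assumes the dependency signals throttling with a 429 response.

using System;
using System.Net.Http;
using Polly;

var httpClient = new HttpClient();

// Retry up to 3 times on 429 responses, with exponential backoff between attempts.
var httpRetryPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => (int)r.StatusCode == 429)
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

var response = await httpRetryPolicy.ExecuteAsync(
    () => httpClient.GetAsync("https://api.example.com/resource"));

Hope this helps you handle transient errors in your application.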

For one of the web applications I was working on, access was to be restricted based on the user belonging to a particular Azure AD group. The application as such did not have any role-based functionality. It felt like an overhead to set up Role Based Access when all we wanted was to restrict access to users belonging to a particular group.

In this post, let us look at how we can set up Azure AD authentication such that only users of a particular group can authenticate against it and get a token. I used the ASP.NET Core 2.1 Azure AD authentication sample, but this applies to any application trying to get a token from an Azure AD application.

Setting up Azure AD Application

In the Azure Portal, navigate to the AD application used for authentication, under Enterprise Applications (from Azure Active Directory). Turn on ‘User Assignment Required’ under Properties for the application (as shown below).

With User Assignment Required turned on, users must first be assigned to this application before being able to access it. When this is not turned on, any user in the directory can access the application.

Once this is turned on, if you try to access your application, it will throw an error indicating the user is not assigned to a role for the application.

Adding User to Role

For users to now be able to access the AD application, they need to be added explicitly to the application. This can be done using the Azure Portal under the ‘Users and groups’ section for the AD application (as shown below).

Users can either be added explicitly to the application or via Groups. Adding a group grants access to all the users within the group. In our scenario, we created an AD group for the application and added users that can access the application to the group. If you have different AD applications per environment (which you should), make sure you do this for all the applications.

Handling Managed Service Identity Service Principal

Even though it is possible to add an MSI service principal to an Azure AD group, it does not work as intended. The request was failing to get a token, with the error that the user is not assigned to a role. It looks like this is one of the cases where a full service principal is required.

To get this working for an MSI service principal, I had to create a dummy application role for the AD application and grant the MSI service principal that role for the AD application. Check the Using AD Role section in this article for full details on setting this up. Note that in this case, you need to explicitly add in the application roles and grant access for the service principal for each of them.

Only users belonging to the group, or those assigned directly to the AD application, can get a token for the AD application and hence access the Web application. This is particularly useful when the application does not have any role-based functionality and all you want is to restrict access to a certain group of people within your directory/organization.

Azure Functions are getting popular, and I am starting to see them more at clients. One typical scenario I come across is authenticating an Azure Function with an Azure Web API. Every time something like this comes up, it means more Azure AD applications, which in turn means more secrets/certificates that need to be managed. But with the Managed Service Identity (MSI) feature on Azure, a lot of these secrets and authentication bits can be taken off our shoulders and left to the platform to manage for us.

In this post, let us explore how we can successfully authenticate/authorize an Azure Function with a Web API using an AD application and Managed Service Identity, and still not have any secrets/certificates involved in the whole process.

Setting Up the Web API

The Azure hosted Web API is set to use Azure AD authentication based on JWT tokens. To enable this, I have the below code in the Startup class, with an AD application created and its ClientId set up in configuration as shown below. Any request to the Web API needs a valid token from the Azure AD application in the request header.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(options =>
    {
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();
        options.Filters.Add(new AuthorizeFilter(policy));
    }).SetCompatibilityVersion(CompatibilityVersion.Version_2_2);

    services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options => 
        {
            options.Audience = Configuration["AzureAd:ClientId"];
            options.Authority = 
                $"{Configuration["AzureAd:Instance"]}{Configuration["AzureAd:TenantId"]}";
        });
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...
    app.UseAuthentication();
    app.UseMvc();
}

Enabling MSI on Azure Function

Managed Service Identity (MSI) can be turned on through the Azure Portal. Under ‘Platform features’ for an Azure Function, select ‘Identity’ as shown below and turn it on for System Assigned.

A system-assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that’s trusted by the subscription of the instance. After the identity is created, the credentials are provisioned onto the instance. The lifecycle of a system-assigned identity is directly tied to the Azure service instance that it’s enabled on. If the instance is deleted, Azure automatically cleans up the credentials and the identity in Azure AD.

Once enabled, you can find the added identity for the Azure function under Enterprise Applications list in the AD directory. Azure internally manages this identity.

Authenticating Function with API

To authenticate with the Web API, we need to present a token from the AD application. Any service principal on the AD can authenticate and retrieve such a token, and so can our Azure Function with the identity turned on. Usually, authenticating with Azure AD requires a Client ID/Secret or Client ID/Certificate combination. However, with MSI turned on, Azure manages these credentials for us in the background, and we don’t have to manage them ourselves. The AzureServiceTokenProvider class from the Microsoft.Azure.Services.AppAuthentication NuGet package helps authenticate an MSI-enabled resource with AD.

With the AzureServiceTokenProvider class, if no connection string is specified, Managed Service Identity, Visual Studio, Azure CLI, and Integrated Windows Authentication are tried in turn to get a token. Even if no connection string is specified in code, one can be set in the AzureServicesAuthConnectionString environment variable.
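For instance, the below one-liner is a sketch of pinning the provider to Managed Service Identity via a connection string, assuming the ‘RunAs=App;’ format the library documents; by default you would leave it empty and let the probing order pick MSI on Azure.

// Force Managed Service Identity instead of the default probing order.
var tokenProvider = new AzureServiceTokenProvider("RunAs=App;");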

To access the API, we need to pass the token from the AD application as a Bearer token, as shown below.

var target = "<AD App Id of Web API>";
var azureServiceTokenProvider = new AzureServiceTokenProvider();
string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync(target);

var wc = new System.Net.Http.HttpClient();
wc.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
var result = await wc.GetAsync("<Secured API URL>");

Role-Based Authorization For Azure Function MSI

Now that we have the authentication set up between the Azure Function and Web API, we might want to restrict the endpoints on the API the function can call. It is the typical user authorization scenario, and similar approaches apply.

Using AD Role

To add an App Role for the MSI function, we first need to add an ‘Application’ role to the AD application (the one the Web API uses to authenticate against). The allowedMemberTypes field does allow multiple values if you are looking to add the same role for both User and Application.

"appRoles": [
        {
            "allowedMemberTypes": [
                "Application"
            ],
            "description": "All",
            "displayName": "All",
            "id": "d1c2ade8-98f8-45fd-aa4a-6d06b947c66f",
            "isEnabled": true,
            "lang": null,
            "origin": "Application",
            "value": "All"
        }
    ]

With the role defined, we can add the MSI Service Principal to the application role using the New-AzureADServiceAppRoleAssignment cmdlet.

# TenantId required only if multiple tenant exists for login
Connect-AzureAd -TenantId 'TENANT ID' 
# Azure Function Name (Service Principal created will have same name)
# Check under Enterprise Applications
$msiServicePrincipal = Get-AzureADServicePrincipal -SearchString "<Azure Function Name>" 
# AD App Name 
$adApp = Get-AzureADServicePrincipal -SearchString "<AD App Web API>"

New-AzureADServiceAppRoleAssignment -Id $adApp.AppRoles[0].Id `
     -PrincipalId $msiServicePrincipal.ObjectId `
     -ObjectId $msiServicePrincipal.ObjectId `
     -ResourceId $adApp.ObjectId

Using AD Group

In a previous post, we saw how to use Azure AD Groups to provide role-based access. You can add a Service Principal to the AD group either through the portal or code.

# -g: AD Group Id, --member-id: Service Principal Object Id
az ad group member add \
    --subscription b3c70d42-a0b9-4730-84a4-b0004a31f7b4 \
    -g aa762499-6287-4e28-8753-27e90cfd2738 \
    --member-id bb8920f3-7a76-4d92-9fff-fc10afa7887a

To verify that the token retrieved using the AzureServiceTokenProvider has the associated claims, decode the token using jwt.io. In this case, I have added both roles and groups for the MSI service principal, and you can see that below (highlighted).

The Web API can now use these claims from the token to determine what functionality needs to be available for the associated roles. Here is a detailed post on how to do that using claims based on Groups.
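As a sketch of the role-based side (assuming the ‘All’ application role defined earlier), the Web API can map the token’s ‘roles’ claim and then guard endpoints with the standard Authorize attribute:

// In the AddJwtBearer options from the earlier setup: if the 'roles' claim
// is not already mapped to the role claim type, point it there explicitly.
options.TokenValidationParameters.RoleClaimType = "roles";

// On a controller (or action) in the Web API:
[Authorize(Roles = "All")]
[ApiController]
public class UsersController : ControllerBase
...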

By enabling MSI, we have one less set of authentication keys shipped as part of our application. The infrastructure layer, Azure, handles this for us, which makes building applications a lot easier. Azure supports MSI for a lot more resources where similar techniques can be applied. Hope this helps you authenticate and authorize the Azure Functions accessing your Web API, and also helps you discover more use cases for Managed Service Identity (MSI).

References:

Using Azure Managed Service Identities with your apps

How To Take iOS App Store Screenshots Using Google Chrome For Cordova Applications

Using Chrome Browser to take screenshots in all resolutions as required by the App Stores.

When submitting an app to the iOS App Store, it is now mandatory to upload screenshots for the iPhone XS Max or the 12.9-inch iPad Pro. I had just moved to a new client and was to push an update for their existing mobile application. I had none of the iOS app development related applications set up and had to get this done as soon as possible. The mobile app is the website packaged as a Cordova application.

In this post, we will see how we can use the Google Chrome browser to take screenshots in the different resolutions that the App Store requires for app submission.

Simulate Mobile Devices in Chrome

Use the Chrome DevTools to simulate mobile devices and load the website in mobile view. In this mode, the device drop-down shows a predefined list of mobile devices. You can switch between them to see how the site will render on various devices.

Only a subset of available devices is shown in the drop-down. Selecting the Edit button lists all available devices; only the devices that are ticked in the list are visible in the drop-down. You can edit this to add/remove devices from the drop-down list.

Add Custom Device

The iPhone XS Max is a relatively new device, and the device setting is not yet available in the predefined list of devices. This could very well be available at the time of reading; however, there may be another device that you are looking for that is not in the list. In this case, you can add the device to the list using the ‘Add custom device’ button in the Edit screen that lists all the devices (shown above).

The iPhone XS Max has a screen size of 1242px x 2688px. Using the actual pixels might render the page too large for your laptop/monitor. You can reduce the size by a factor, the Device Pixel Ratio (DPR), and enter that along with the device details. In the example below, I have used a DPR of 3, which makes the width and height smaller - 414px x 896px (1242 / 3 = 414px; 2688 / 3 = 896px).

Capture Screenshot in Native Resolution

To upload the screenshots to the App Store, you need them to be in the native resolution, as if you had taken the screenshot on the actual device. Since the page rendered in the mobile device layout is of a different size, you cannot simply take a screen capture of the rendered page; that will be in a different resolution. To capture a screenshot in the native resolution, there are two options.

From the options drop-down menu (that can be accessed from the three vertical dots button), as shown below, you can use the ‘Capture screenshot’ menu item.

The screenshot option is also available in the Command Menu, which is accessible via the menu option in the DevTools or the Control+Shift+P keyboard shortcut. Filter the list of available commands and choose the ‘Capture screenshot’ command to take a screenshot in native resolution.

The screenshots generated using either of the above methods will be in the actual device resolution, in this case 1242px x 2688px. The screenshots can be uploaded as is to the App Store and submitted for review.

Hope this helps you to generate screenshots for your mobile applications built using Cordova.

Code Signing MSI Installer and DLLs in Azure DevOps

Code signing using Microsoft SignTool in the Azure DevOps build pipeline.

Code Signing is the process of digitally signing executables and scripts to confirm the software author and guarantee that the code has not been altered or corrupted since it was signed.

Code Signing is something that you need to consider if you are distributing installable software packages that are consumed by your consumers. Some of the most obvious examples would be any of the standalone applications that you install on your machine. Code signing proves the authenticity of the distributed software package and ensures that it was not tampered with between when it was created and when it was delivered to you. Code signing usually involves a Code Signing Certificate, which uses a public/private key pair to sign and verify. You can purchase one from any of the certificate issuing authorities; Google should help you choose one.

At one of my recent projects, I had to set up code signing for the MSI installer for a Windows service. The following artifacts were to be signed: the Windows service executable, the dependent DLLs, and the MSI installer (that packages the above). We were using Azure DevOps for our build process and wanted to automate the code signing. We are using WIX to generate the MSI installer; check out my previous post on Building Windows Service Installer on Azure DevOps if you want more details on setting up a WIX installer in DevOps.

Since we need to sign the executable, the DLLs, and also the installer that packages them, I do this as a two-step process in the build pipeline - first sign the DLLs and executable, after which the installer project is built and signed. This ensures the artifacts included in the installer are also signed. We self-host our build agent, so I installed our code signing certificate on the machine manually and added the certificate thumbprint as a build variable in DevOps. For a hosted agent, you can upload the certificate as a Secure File and use it from there.

Sign DLLs

The pipeline first builds the whole project to generate all the DLLs and the service executable. Microsoft’s SignTool is used to sign the DLLs and executable. The tool takes in the certificate’s thumbprint as a parameter along with a few others; check the documentation to see what each parameter does. It does accept wildcards for the files to be signed, so if you follow a convention for project/DLL names (which you should), signing them all can be done in one command.

c:\cert\signtool.exe sign /tr http://timestamp.digicert.com ^
    /fd sha256 /td sha256 /sm /sha1 "$(CodeSignCertificateThumbprint)" ^
    /d "My Project description" MyProject.*.dll

Code Signing Azure DevOps

Sign Installer

Now that we have the DLLs and the executable signed, we need to package them using the WIX project. By default, building a WIX project rebuilds all the dependent assemblies, which would overwrite the above-signed DLLs. To avoid this, make sure to pass a parameter to the build command to not build project references (/p:BuildProjectReferences=false) and only package them. The MSI installer in the build output can then be signed using the same tool.

Code Signing Azure DevOps

Sign PowerShell Scripts

We also had a few PowerShell scripts that were packaged along in a separate application. To sign them, you can use the Set-AuthenticodeSignature cmdlet. All you need is to get the certificate from the appropriate store and pass it to the cmdlet along with the files that need to be signed.

$cert = (Get-ChildItem Cert:\LocalMachine\My `
    | Where-Object { $_.Thumbprint -eq "$(CodeSignCertificateThumbprint)" })[0]
Set-AuthenticodeSignature -TimestampServer "http://timestamp.digicert.com" `
    .\Deploy\*.ps1 $cert

If you are distributing packaged software for your end users to install, it is generally a good idea to code sign your artifacts and also publish the verifiable hash on your website along with the downloadables.

I hope this helps!

Setting Up Dual 4K Monitors - Dell P2715Q and Dell U2718Q

Setting up two 4k monitors for home office.

Dual 4k

Last week I added another 27" 4k monitor to my home office. I had been thinking for a while about having an extra monitor, just for the fun of it and to see if it has any benefits. I have had a Dell P2715Q for over a year and was looking to get another one of the same model. However, it looks like Dell has discontinued this model, and it was not available anywhere in Australia. The recommended alternative is the Dell U2718Q, which is also a 27" 4k but with thinner bezels. I also like the color and appearance of the new monitor.

I have the monitors in the below order, with my laptop on the left (which I mostly leave closed for the moment), the new Dell U2718Q in the center (main working display) and the P2715Q on the right (slightly angled in).

Dual monitor layout

Surface Pro and Dual 4k

My daily work machine is a Surface Pro i7 Model 1796, which has a Mini DisplayPort for video output. With just one Dell monitor, connecting it to the Surface was easy and worked great at 60Hz. The only way to connect an extra display to the Surface Pro is to either daisy chain the monitors or use a dock/USB device with more display ports.

The Dell P2715Q does support daisy chaining; the U2718Q does not. But since there are only two monitors, only one of them needs to support MST (along with the graphics card on your laptop). MST needs to be explicitly enabled in the monitor settings (Menu -> Display -> MST -> Primary); check out the User’s Guide for more details. I assume it is off by default because it puts the monitor into 30Hz as opposed to the default 60Hz.

Daisy chaining is straightforward - the output from the laptop goes to the input of the first monitor (in my case the P2715Q, as only that supports MST), and the video out of that connects to the input on the U2718Q.

Dell MST

However, note that with MST turned on, both the primary and the secondary monitor will be set to 30Hz at 4k resolution.

MST Modes

Off: Default mode - 4k2k 60Hz with MST function Disabled.
Primary: Set as primary mode at 4K2K 30Hz with MST (DP out) enabled.
Secondary: Set as secondary mode at 4K2K 30Hz with MST (DP out) disabled.

You can confirm the display settings from the ‘View advanced display info’ option in the Start menu (Windows 10). The Surface Pro display runs at 60Hz while the other two monitors run at 30Hz.

I have come across other people mentioning they have had success connecting one 4k monitor from the Surface display port and another from the Surface Dock, both running at 60Hz. However, I am not so keen on getting a dock specific to a device.

There are some options from Targus and a few other brands, but all are a bit costly. For a full-blown list of options on connecting a Surface Pro with multiple monitors, check out this blog post.

I have been working at 30Hz for the past week and have not found many issues with it. Primarily my work involves text-based interfaces and not much video or image editing. Because of this, I don’t see much difference running at 30Hz. But given a chance, I would like to bump up the experience to 60Hz, though that would mean shelling out some extra dollars for a dual 4k dock, or switching my work machine to my MacBook Pro.

MacBook Pro (2015)

I have a MacBook Pro (2015) model that I don’t use much these days. Even though it is a bit older, it has much more connectivity than the Surface Pro. It has two Thunderbolt 2 ports, both of which run 4k at 60Hz. It also has an additional HDMI port, but that supports 4k only at 30Hz.

Mac book pro dual 4k

If I find much trouble with 4k at 30Hz, I will consider switching over to the MacBook rather than getting an external dock. Overall, having the extra monitor does help a lot. The primary monitor usually has Visual Studio or Code, and the second one has Chrome, Teams, Spotify, etc. running. I might consider adding a monitor arm at some point to make the arrangement cleaner, but that’s something for later!

Custom Authorization Policy Providers in .Net Core For Checking Multiple Azure AD Security Groups

Extending Azure AD Groups role-based access to support granting access based on combinations of multiple groups.

In the post, .Net Core Web App and Azure AD Security Groups Role based access, we saw how to use Azure AD Security Groups to provide role-based access for your .Net Core applications. We covered only cases where our controllers/functions were granted access based on a single Azure AD Security Group. At times you might want to extend this to include multiple groups, e.g., a user can edit an order if they belong to the Admin OR Manager group, or to both the Admin AND Manager groups. In this post, we will see how to achieve that.

Belongs to Multiple Groups

In the previous post, we added a different policy per AD Security Group and used that in the Authorize attribute to restrict access to a particular Security Group, say Admin. If you want to limit functionality to users who belong to both Admin AND Manager, you can use two attributes one after the other, as shown below.

[Authorize(Policy = "Admin")]
[Authorize(Policy = "Manager")]
[ApiController]
public partial class AddUsersController : ControllerBase
...

The above code looks for policies named ‘Admin’ and ‘Manager’, which we registered on application startup using the services.AddAuthorization call (as shown in the previous post and sketched below).
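For context, here is a sketch of what those per-group registrations look like; the group config variables are assumptions, and it uses the IsMemberOfAnyGroupRequirement introduced below (the previous post’s single-group requirement works the same way).

services.AddAuthorization(options =>
{
    // One policy per AD Security Group, named after the group.
    options.AddPolicy("Admin", policy =>
        policy.AddRequirements(new IsMemberOfAnyGroupRequirement(adminGroup)));
    options.AddPolicy("Manager", policy =>
        policy.AddRequirements(new IsMemberOfAnyGroupRequirement(managerGroup)));
});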

Belongs to Any One Group

In cases where you want to restrict access to a controller/function depending on the user being part of at least one group from a given list (e.g., the user is either Admin OR Manager), the natural tendency is to use a comma-separated list of values for the Policy, as shown below.

[Authorize(Policy = "Admin,Manager")]
[ApiController]
public partial class AddUsersController : ControllerBase
...

The above code looks for a policy named ‘Admin,Manager’, and for it to work you need to add a policy with that exact name.

options.AddPolicy(
    "Admin,Manager",
    policy =>
        policy.AddRequirements(new IsMemberOfAnyGroupRequirement(adminGroup, managerGroup)));

As you can see, I have modified the IsMemberOfGroupRequirement class from the previous blog post into IsMemberOfAnyGroupRequirement, which now takes in a list of AzureAdGroupConfig. The handler for the requirement (IsMemberOfAnyGroupHandler) is updated to check whether the user’s claims contain at least one of the required claims.

public class AzureAdGroupConfig
{
    public string GroupName { get; set; }
    public string GroupId { get; set; }
}

public class IsMemberOfAnyGroupRequirement : IAuthorizationRequirement
{
    public AzureAdGroupConfig[] AzureAdGroupConfigs { get; set; }

    public IsMemberOfAnyGroupRequirement(params AzureAdGroupConfig[] groupConfigs)
    {
        AzureAdGroupConfigs = groupConfigs;
    }
}

public class IsMemberOfAnyGroupHandler : AuthorizationHandler<IsMemberOfAnyGroupRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, IsMemberOfAnyGroupRequirement requirement)
    {
        foreach (var adGroupConfig in requirement.AzureAdGroupConfigs)
        {
            var groupClaim = context.User.Claims
                .FirstOrDefault(claim => claim.Type == "groups" &&
                    claim.Value.Equals(
                        adGroupConfig.GroupId,
                        StringComparison.InvariantCultureIgnoreCase));

            if (groupClaim != null)
            {
                context.Succeed(requirement);
                break;
            }
        }
       
        return Task.CompletedTask;
    }
}

The code now works fine, and users belonging to either Admin OR Manager can now access the AddUsersController functionality.

Custom Authorization Policy Providers

Even though the above code works fine, we had to add a policy specific to the ‘Admin,Manager’ combination. These combinations can soon start to grow in a large application and become hard to maintain. Instead, we can add a custom authorize attribute and customize the policy retrieval to match our needs.

IsMemberOfAnyGroupAttribute is a custom authorize attribute that takes in a list of group names and concatenates the names using a known prefix and a separator. The known prefix, POLICY_PREFIX, helps us identify the kind of policy that we are looking at, given just the policy name.

Make sure to choose a SEPARATOR that you know will not appear in your group names.

public class IsMemberOfAnyGroupAttribute : AuthorizeAttribute
{
    public const string POLICY_PREFIX = "IsMemberOfAnyGroup";
    public const string SEPARATOR = "_";

    private string[] _groups;

    public IsMemberOfAnyGroupAttribute(params string[] groups)
    {
        _groups = groups;
        var groupsName = string.Join(SEPARATOR, groups);
        Policy = $"{POLICY_PREFIX}{groupsName}";
    }
}

To create a custom policy provider, inherit from IAuthorizationPolicyProvider or from the default implementation available, DefaultAuthorizationPolicyProvider. Since I wanted to fall back to the default policies first and only then provide custom policies, I am inheriting from DefaultAuthorizationPolicyProvider. The only parameter available to us is the policyName, which is used to resolve the appropriate policy. The code first checks for any default policies available, i.e., those which are explicitly registered; I assume any policies referred to by the default Authorize attribute will have an associated policy registered explicitly. For policies with the known prefix, POLICY_PREFIX, we extract the group names and build a new IsMemberOfAnyGroupRequirement dynamically.

public class ADGroupsPolicyProvider : DefaultAuthorizationPolicyProvider
{
    private List<AzureAdGroupConfig> _adGroupConfigs;

    public ADGroupsPolicyProvider(
        IOptions<AuthorizationOptions> options,
        List<AzureAdGroupConfig> adGroupConfigs): base(options)
    {
        _adGroupConfigs = adGroupConfigs;
    }

    public override async Task<AuthorizationPolicy> GetPolicyAsync(string policyName)
    {
        var policy = await base.GetPolicyAsync(policyName);

        if (policy == null &&
            policyName.StartsWith(
                IsMemberOfAnyGroupAttribute.POLICY_PREFIX,
                StringComparison.InvariantCultureIgnoreCase))
        {
            var groups = policyName
                .Replace(IsMemberOfAnyGroupAttribute.POLICY_PREFIX, string.Empty)
                .Split(
                    new string[] { IsMemberOfAnyGroupAttribute.SEPARATOR },
                    StringSplitOptions.RemoveEmptyEntries);

            var groupConfigs = (from groupName in groups
                                  join groupConfig in _adGroupConfigs
                                  on groupName equals groupConfig.GroupName
                                  select groupConfig)
                                 .ToArray();

           policy = new AuthorizationPolicyBuilder()
                .AddRequirements(new IsMemberOfAnyGroupRequirement(groupConfigs))
                .Build();
        }

        return policy;
    }
}

Don’t forget to register the new policy provider at Startup.

services.AddSingleton<IAuthorizationPolicyProvider, ADGroupsPolicyProvider>();

Using the new attribute is the same as before. However, you don’t need to register a policy for ‘Admin,Manager’ explicitly. When the default policy provider cannot find a policy, it returns a dynamic policy with an IsMemberOfAnyGroupRequirement containing the given groups. For any of the possible group combinations, this now happens automatically.

[IsMemberOfAnyGroupAttribute("Admin", "Manager")]
[ApiController]
public partial class AddUsersController : ControllerBase

Hope this helps you extend Policy-based authorization in ASP.NET Core applications and mix and match the way you want to enable access for your users.

I have been on and off the Pomodoro technique and always wanted to be more consistent following it. A while back I was using Tomighty, a minimalistic Pomodoro timer. However, with Tomighty, I often forgot to start the timer and soon stopped using it altogether. Recently, when reading through a productivity tips article, I came across Toggl.

Toggl is the leading time tracking app for agencies, teams and small businesses. A simple time tracker with powerful reports and cross-platform functionalities.

Toggl

Pomodoro Tracker

Even though I am interested in tracking time, I am not so keen on the reports and cross-platform functionalities that Toggl provides. Especially after trying to Minimalize my Online Life, I am very particular about adding a new app. The one feature that I am interested in with Toggl is the Pomodoro tracker that comes with the desktop app. Toggl has a mini timer that can float around anywhere on your desktop and is very minimalistic. All it has is a task name with a Start/Stop button and the elapsed time. Within the application settings, you can configure the Pomodoro interval and the break length. The timer stops automatically after the set Pomodoro interval.

Reminder

Another Toggl feature that I like is the reminder to track time/Pomodoros. You can choose the days, times, and interval for the reminder. Any time you are not tracking time, a Windows notification reminds you to do so. This reminder should help me stick with the Pomodoro technique.

Toggle Reminder

Toggl does not have a built-in option to start up when Windows starts, but it’s easy enough to add a shortcut to Toggl in the Windows startup folder.

Happy tracking!

Building Windows Service Installer on Azure DevOps

Continuously building a Windows installer on Azure DevOps using VdProj or WIX.

Recently I was looking into packaging a Windows Service as an MSI installer. I wanted the MSI created in the build pipeline, in this case Azure DevOps, and published as a build artifact. The Windows service uses the .NET Framework, and looking around for installer options, I found mainly the two approaches discussed below.

Visual Studio Installer Projects (*.VdProj)

Microsoft Visual Studio Installer Projects is available as an extension to Visual Studio and provides support for Visual Studio Installer Projects in Visual Studio. By adding this setup project to the solution, you can create a setup file that steps through a wizard and installs your application. If you are looking for how to set up the installer project, this Stack Overflow answer shows you exactly how. Once you have the installer project set up locally and have the MSI file generated on building the solution, we can set this up in the Azure DevOps pipeline and automate it.

The Visual Studio Installer Projects require a custom build agent.

The only way I could find to get the installer project to run and build an MSI file was to set up a custom build agent; hosted agents do not support this at the moment. I set up the custom agent on a Windows machine and have not tried any of the other variants. The only tricky thing with setting up the custom agent was step 4 under Prepare Permissions. To find the scope ‘Agent Pools (read, manage)’, make sure you click the ‘Show all/less scopes’ link towards the bottom of the page (as shown in the image below) - at times some things just miss your eyes! The rest was pretty straightforward, and you can have the custom build agent set up in minutes.

Azure DevOps - Custom Agent token setup

In your build pipeline definition, make sure to select the new custom agent as your default Agent pool. The Build VS Installer is a custom task that can be used to build your Visual Studio Installer projects (.vdproj files). Since MSBuild cannot be used to build these projects, you need to make sure you have Visual Studio installed on the agent, with the Installer Projects extension. Setting up the custom task is straightforward - you can either choose to build just one particular installer project in the solution or all of them.

Azure Devops - Build Pipeline

I ran into the error message ‘An error occurred while validating. HRESULT = 8000000A’ when running this build through the pipeline. I soon figured out that others had faced this in the past, and running the DisableOutOfProcBuild.exe solved the issue. To do this in the pipeline, add a command line task (the ‘Set EnableOutOfProcBuild’ step in the image above) and use the scripts based on the appropriate VS version.

Make sure to either select the ‘Create artifact for .msi’ option in the custom build task or manually copy the MSI out to the artifacts directory. The build now generates an MSI every time!

WIX

WiX is an open source project that provides a set of tools to build Windows installation packages. The installer packages are XML-based, and the learning curve is relatively steep. However, it offers a lot more features and capabilities than the Visual Studio Installer project we saw above. Microsoft hosted agents support building WIX projects, and I was able to run them successfully on the Hosted VS2017 agent.

WIX projects can run on the Hosted VS2017 agent. Just this one reason makes WIX a far better choice over VdProj if you are starting fresh.

Azure Devops WIX

If you are running on a custom build agent, you will have to install the WiX toolset for everything to work. The default build task in Azure DevOps is all that is required to build the project, as WIX integrates well with MSBuild. As you can see, WIX is an easier setup with fewer hassles, so I definitely recommend that path if you are not already invested in VdProj.

Hope this helps you set up building installer projects on Azure DevOps.
