For one of the web applications I was working on, access was to be restricted to users belonging to a particular Azure AD group. The application as such did not have any role-based functionality, and it felt an overhead to set up Role Based Access Control when all we wanted was to restrict access to users of a particular group.

    In this post, let us look at how we can set up Azure AD authentication such that only users of a particular group can authenticate against it and get a token. I used the ASP.NET Core 2.1 Azure AD authentication sample, but this applies to any application trying to get a token from an Azure AD application.

    Setting up Azure AD Application

    In the Azure Portal, navigate to the AD application used for authentication, under Enterprise Applications (from Azure Active Directory). Turn on ‘User Assignment Required’ under Properties for the application (as shown below).

    With User Assignment Required turned on, users must first be assigned to this application before being able to access it. When this is not turned on, any user in the directory can access the application.

    Once this is turned on, if you try to access your application, it throws an error indicating that the user is not assigned to a role for the application.

    Adding User to Role

    For users to now be able to access the AD application, they need to be added explicitly to the application. This can be done using the Azure Portal under the ‘Users and groups’ section for the AD application (as shown below).

    Users can either be added explicitly to the application or via Groups. Adding a group grants access to all the users within the group. In our scenario, we created an AD group for the application and added users that can access the application to the group. If you have different AD applications per environment (which you should), make sure you do this for all the applications.
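    If you prefer to script this instead of using the portal, adding a user to the AD group can be sketched with the AzureAD PowerShell module; the ObjectIds below are placeholders, not real values:

    ```powershell
    # Connect to the directory first
    Connect-AzureAD

    # ObjectIds are placeholders - look them up with
    # Get-AzureADGroup / Get-AzureADUser
    Add-AzureADGroupMember -ObjectId "<group-object-id>" `
        -RefObjectId "<user-object-id>"
    ```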

    Handling Managed Service Identity Service Principal

    Even though it is possible to add an MSI service principal to an Azure AD group, it does not work as intended. The request to get a token was failing with the error that the user is not assigned to a role. It looks like this is one of the cases where a full service principal is required.

    To get this working for an MSI service principal, I had to create a dummy application role for the AD application and grant the MSI service principal that role for the AD application. Check the Using AD Role section in this article for full details on setting this up. Note that in this case, you need to explicitly add in the application roles and grant access for the service principal for each of them.

    Only users who belong to the group, or who have been assigned directly to the AD application, can get a token for it and hence access the Web application. This is particularly useful when the application does not have any role-based functionality and all you want is to restrict access to a certain group of people within your directory/organization.

    Azure Functions are getting popular, and I am starting to see them more at clients. One typical scenario I come across is authenticating an Azure Function with an Azure Web API. Every time something like this comes up, it means more Azure AD applications, which in turn means more secrets/certificates to manage. But with the Managed Service Identity (MSI) feature on Azure, a lot of these secrets and authentication bits can be taken off our shoulders and left to the platform to manage for us.

    In this post, let us explore how to successfully authenticate/authorize an Azure Function with a Web API using an AD application and Managed Service Identity, without any secrets/certificates involved in the whole process.

    Setting Up the Web API

    The Azure-hosted Web API is set up to use Azure AD authentication based on JWT tokens. To enable this, I have the below code in the Startup class, with an AD application created and its ClientId configured in the application settings. Any request to the Web API needs a valid token from the Azure AD application in the request header.

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(options =>
        {
            // Require an authenticated user for all endpoints by default
            var policy = new AuthorizationPolicyBuilder()
                .RequireAuthenticatedUser()
                .Build();
            options.Filters.Add(new AuthorizeFilter(policy));
        });

        services
            .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                options.Audience = Configuration["AzureAd:ClientId"];
                options.Authority =
                    $"{Configuration["AzureAd:Instance"]}{Configuration["AzureAd:TenantId"]}";
            });
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseAuthentication();
        app.UseMvc();
    }

    Enabling MSI on Azure Function

    Managed Service Identity (MSI) can be turned on through the Azure Portal. Under ‘Platform features’ for an Azure Function, select ‘Identity’ as shown below and turn it on for System Assigned.

    A system-assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that’s trusted by the subscription of the instance. After the identity is created, the credentials are provisioned onto the instance. The lifecycle of a system-assigned identity is directly tied to the Azure service instance that it’s enabled on. If the instance is deleted, Azure automatically cleans up the credentials and the identity in Azure AD.

    Once enabled, you can find the added identity for the Azure function under Enterprise Applications list in the AD directory. Azure internally manages this identity.
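    If you prefer the command line, enabling the system-assigned identity can also be sketched with the Azure CLI; the names below are placeholders:

    ```shell
    az functionapp identity assign --name "<function-app-name>" --resource-group "<resource-group>"
    ```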

    Authenticating Function with API

    To authenticate with the Web API, we need to present a token from the AD application. Any service principal on the AD can authenticate and retrieve such a token, and so can our Azure Function with the identity turned on. Usually, authenticating with Azure AD requires a ClientId/Secret or ClientId/Certificate combination. However, with MSI turned on, Azure manages these credentials for us in the background, and we don’t have to manage them ourselves. The AzureServiceTokenProvider class from the Microsoft.Azure.Services.AppAuthentication NuGet package helps authenticate an MSI-enabled resource with the AD.

    With the AzureServiceTokenProvider class, if no connection string is specified, Managed Service Identity, Visual Studio, Azure CLI, and Integrated Windows Authentication are tried in turn to get a token. Even if no connection string is specified in code, one can be specified in the AzureServicesAuthConnectionString environment variable.
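    For example, to force a specific service principal to be used (say, for local testing), the connection string can be set as an environment variable; the RunAs=App format below is from the AppAuthentication documentation, and the values are placeholders:

    ```
    setx AzureServicesAuthConnectionString "RunAs=App;AppId=<app-id>;TenantId=<tenant-id>;AppKey=<client-secret>"
    ```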

    To access the API, we need to pass the token from AD application as a Bearer token, as shown below.

    // Requires the Microsoft.Azure.Services.AppAuthentication NuGet package
    // and using System.Net.Http.Headers;
    var target = "<AD App Id of Web API>";
    var azureServiceTokenProvider = new AzureServiceTokenProvider();
    string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync(target);

    var wc = new System.Net.Http.HttpClient();
    wc.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
    var result = await wc.GetAsync("<Secured API URL>");

    Role-Based Authorization For Azure Function MSI

    Now that we have the authentication set up between the Azure Function and the Web API, we might want to restrict the endpoints on the API that the function can call. This is the typical user authorization scenario, and similar approaches apply.

    Using AD Role

    To add an App Role for the MSI function, we first need to add an ‘Application’ role to the AD application (the one that the Web API uses to authenticate against). The allowedMemberTypes field does allow multiple values if you are looking to add the same role for both User and Application.

    "appRoles": [
        {
            "allowedMemberTypes": [
                "Application"
            ],
            "description": "All",
            "displayName": "All",
            "id": "d1c2ade8-98f8-45fd-aa4a-6d06b947c66f",
            "isEnabled": true,
            "lang": null,
            "origin": "Application",
            "value": "All"
        }
    ],

    With the role defined, we can add the MSI Service Principal to the application role using New-AzureADServiceAppRoleAssignment cmdlet.

    # TenantId required only if multiple tenants exist for the login
    Connect-AzureAD -TenantId 'TENANT ID'
    # Azure Function Name (Service Principal created will have same name)
    # Check under Enterprise Applications
    $msiServicePrincipal = Get-AzureADServicePrincipal -SearchString "<Azure Function Name>" 
    # AD App Name 
    $adApp = Get-AzureADServicePrincipal -SearchString "<AD App Web API>"
    New-AzureADServiceAppRoleAssignment -Id $adApp.AppRoles[0].Id `
         -PrincipalId $msiServicePrincipal.ObjectId `
         -ObjectId $msiServicePrincipal.ObjectId `
         -ResourceId $adApp.ObjectId

    Using AD Group

    In a previous post, we saw how to use Azure AD Groups to provide role-based access. You can add a Service Principal to the AD group either through the portal or code.

    # -g : AD Group Id; --member-id : Service Principal Object Id
    az ad group member add `
        --subscription b3c70d42-a0b9-4730-84a4-b0004a31f7b4 `
        -g aa762499-6287-4e28-8753-27e90cfd2738 `
        --member-id bb8920f3-7a76-4d92-9fff-fc10afa7887a

    To verify that the token retrieved using the AzureServiceTokenProvider has the associated claims, decode the token using a JWT decoding tool. In this case, I have added both roles and groups for the MSI service principal, and you can see that below (highlighted).
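    For illustration, a trimmed, hypothetical payload of the decoded token might look like the below; all IDs are placeholders, not real values:

    ```json
    {
      "aud": "<AD App Id of Web API>",
      "appid": "<MSI service principal application id>",
      "groups": ["<AD group object id>"],
      "roles": ["All"]
    }
    ```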

    The Web API can now use these claims from the token to determine what functionality needs to be available for the associated roles. Here is a detailed post on how to do that using claims based on Groups.

    By enabling MSI, we have one less set of authentication keys shipped as part of our application. The infrastructure layer, Azure, handles this for us, which makes building applications a lot easier. Azure supports MSI for a lot more resources where similar techniques can be applied. Hope this helps you authenticate and authorize the Azure Functions accessing your Web API and also helps you discover more use cases for Managed Service Identity (MSI).



    How To Take iOS App Store Screenshots Using Google Chrome For Cordova Applications

    Using Chrome Browser to take screenshots in all resolutions as required by the App Stores.

    When submitting an app to the iOS App Store, it is now mandatory to upload screenshots for the iPhone XS Max or the 12.9-inch iPad Pro. I had just moved to a new client and had to push an update for their existing mobile application. I had none of the iOS app development applications set up and had to get this done as soon as possible. The mobile app is the website packaged as a Cordova application.

    In this post, we will see how we can use the Google Chrome browser to take screenshots in the different resolutions that the App Store requires for app submission.

    Simulate Mobile Devices in Chrome

    Use the Chrome DevTools to simulate mobile devices and load the website in mobile view. In this mode, the device drop-down shows a predefined list of mobile devices. You can switch between them to see how the site renders on various devices.

    Only a subset of available devices is shown in the drop-down. Selecting the Edit button lists all available devices; only the devices ticked in that list are visible in the drop-down. You can edit this to add/remove devices from the drop-down list.

    Add Custom Device

    The iPhone XS Max is a relatively new device, and its settings are not yet available in the predefined list of devices. This could very well be available at the time of reading; however, there may be another device that you are looking for that is not in the list. In this case, you can add the device to the list using the ‘Add custom device’ button in the Edit screen that lists all the devices (shown above).

    The iPhone XS Max has a screen size of 1242px x 2688px. Using the actual pixels might render the page too large for your laptop/monitor. You can reduce the size by a factor, the Device Pixel Ratio (DPR), and enter that along with the device details. In the example below, I have used a DPR of 3, which makes the width and the height smaller - 414px x 896px (1242/3 = 414px; 2688/3 = 896px).

    Capture Screenshot in Native Resolution

    To upload the screenshots to the App Store, you need them to be in the native resolution, as if you had taken a screenshot on the actual device. Since the page rendered in the mobile device layout is of a different size, you cannot simply take a screen capture of the rendered page, as that will be in a different resolution. To capture a screenshot in the native resolution, there are two options:

    From the options menu drop down (that can be accessed from the three vertical dots button) as shown below you can use the ‘Capture Screenshot’ menu item.

    The screenshot option is also available in the Command Menu, which is accessible via the menu option in the DevTools or the Control+Shift+P keyboard shortcut. Filter the list of available commands and choose the ‘Capture Screenshot’ command to take a screenshot in native resolution.

    The generated screenshots using any of the above methods will be in the actual device resolution, in this case, 1242px x 2688px. The screenshots can be uploaded as is to the App Store and submitted for review.

    Hope this helps you to generate screenshots for your mobile applications built using Cordova.

    Code Signing MSI Installer and DLLs in Azure DevOps

    Code Sign using Microsoft SignTool in Azure Devops build pipeline.

    Code Signing is the process of digitally signing executables and scripts to confirm the software author and guarantee that the code has not been altered or corrupted since it was signed.

    Code Signing is something that you need to consider if you are distributing installable software packages that are consumed by your consumers. Some of the most obvious examples would be any of the standalone applications that you install on your machine. Code signing provides authenticity of the software package distributed and ensures that it is not tampered with between when it was created and delivered to you. Code Signing usually involves using a Code Signing Certificate which follows the public/private key pair to sign and verify. You can purchase this from all certificate issuing authorities; google should help you choose one.

    At one of my recent projects, I had to set up Code Signing for the MSI installer of a Windows Service. The following artifacts were to be signed: the Windows Service executable, the dependent DLLs, and the MSI installer (that packages the above). We were using Azure DevOps for our build process and wanted to automate the code signing. We are using WiX to generate the MSI installer; check out my previous post on Building Windows Service Installer on Azure Devops if you want more details on setting up the WiX installer in DevOps.

    Since we need to sign the executable, the DLLs and also the installer that packages them, I do this in a two-step process in the build pipeline - first sign the DLLs and executable, after which the installer project is built and signed. It ensures the artifacts included in the installer are also signed. We self host our build agent, so I installed our Code Signing certificate on the machine manually and added the certificate thumbprint as a build variable in DevOps. For a hosted agent, you can upload the certificate as a Secure File and use it from there.

    Sign DLLs

    The pipeline first builds the whole project to generate all the DLLs and the service executable. Microsoft’s SignTool is used to sign the DLLs and the executable. The tool takes in the certificate’s thumbprint and a few other parameters; check the documentation to see what each parameter does. It does accept wildcards for the files to be signed, so if you follow a naming convention for projects/DLLs (which you should), signing them all can be done in one command.

    c:\cert\signtool.exe sign /tr "<timestamp server URL>" `
        /fd sha256 /td sha256 /sm /sha1 "$(CodeSignCertificateThumbprint)" `
        /d "My Project description" MyProject.*.dll

    Code Signing Azure DevOps

    Sign Installer

    Now that we have the DLLs and the executable signed, we need to package them using the WiX project. By default, building a WiX project rebuilds all the dependent assemblies, which will overwrite the above-signed DLLs. To avoid this, make sure to pass in a parameter to the build command to not build project references (/p:BuildProjectReferences=false), and only package them. The MSI installer in the build output can then be signed using the same tool.
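    As a sketch, the MSBuild invocation for the installer project would pass the flag along these lines; the project name here is hypothetical:

    ```
    msbuild MyService.Setup.wixproj /p:Configuration=Release /p:BuildProjectReferences=false
    ```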

    Code Signing Azure DevOps

    Sign Powershell Scripts

    We also had a few Powershell scripts that were packaged along in a separate application. To sign them you can use the Set-AuthenticodeSignature cmdlet. All you need is to get the certificate from the appropriate store and pass it on to the cmdlet along with the files that need to be signed.

    $cert = (Get-ChildItem cert:\LocalMachine\My |
        Where-Object { $_.Thumbprint -eq "$(CodeSignCertificateThumbprint)" })[0]
    Get-ChildItem .\Deploy\*.ps1 |
        Set-AuthenticodeSignature -Certificate $cert -TimestampServer "<timestamp server URL>"

    If you are distributing packaged software to your end users to install it is generally a good idea to Code Sign your artifacts and also publish the verifiable hash on your website along with the downloadables.

    I hope this helps!

    Setting Up Dual 4K Monitors - Dell P2715Q and Dell U2718Q

    Setting up two 4k monitors for home office.

    Dual 4k

    Last week I added another 27″ 4k monitor to my home office. For a while I had been thinking of having an extra monitor, just for the fun of it and to see if it has any benefits. I have had a Dell P2715Q for over a year and was looking to get another one of the same model. However, it looks like Dell has discontinued this model, and it was not available anywhere in Australia. The recommended alternative is the Dell U2718Q, which is also a 27″ 4k but with thinner bezels. I also like the color and appearance of the new monitor.

    I have the monitors in the below order, with my laptop on the left (which I mostly leave closed for the moment), the new Dell U2718Q in the center (main working display), and the P2715Q on the right (slightly angled in).

    Dual monitor layout

    Surface Pro and Dual 4k

    My daily work machine is a Surface Pro i7 Model 1796, which has a Mini DisplayPort for video output. With just one Dell monitor, connecting it to the Surface was easy and worked great at 60Hz. The only way to connect an extra display to the Surface Pro is to either daisy chain the monitors or use a dock/USB device with more display ports.

    The Dell P2715Q does support daisy chaining; the U2718Q does not. But since there are only two monitors, only one of them needs to support MST (along with the graphics card on your laptop). MST needs to be explicitly enabled in the monitor settings (Menu -> Display -> MST -> Primary); check out the User’s Guide for more details. I assume MST is off by default because it puts the monitor into 30Hz as opposed to the default 60Hz.

    Daisy chaining is straightforward - the output from the laptop goes to the input of the first monitor (in my case the P2715Q, as only that supports MST), and from the video out of that, connect to the input on the U2718Q.

    Dell MST

    However, note that with MST turned on, both the primary and the secondary monitor will be set to 30Hz at 4k resolution.

    MST Modes

    Off: Default mode - 4k2k 60Hz with MST function Disabled.
    Primary: Set as primary mode at 4K2K 30Hz with MST (DP out) enabled.
    Secondary: Set as secondary mode at 4K2K 30Hz with MST (DP out) disabled.

    You can confirm the display settings from the ‘View advanced display info’ from the start menu (Windows 10). The Surface Pro runs at 60Hz while the other two monitors are running at 30Hz.

    I have come across other people mentioning they have had success connecting one 4k from the surface display port and another one from the Surface dock, both running at 60Hz. However, I am not so keen on getting a dock specific to a device.

    There are some options from Targus and a few other brands, but all are a bit costly. For a full blown list of options on connecting a Surface Pro with multiple monitors, check out this blog post.

    I have been working at 30Hz for the past week and have not found many issues with it. Primarily my work involves text-based interfaces and not much video or image editing. Because of this, I don’t see much difference running at 30Hz. Given a chance, I would like to bump up the experience to 60Hz, but that would mean shelling out some extra dollars for a dual 4k dock or switching my work machine to my MacBook Pro.

    MacBook Pro (2015)

    I have a MacBook Pro (2015) model that I don’t use much these days. Even though it is a bit older, it has more ports and connectivity options than the Surface Pro. It has two Thunderbolt 2 ports, both of which run 4k at 60Hz. It also has an additional HDMI port, but that supports 4k only at 30Hz.

    Mac book pro dual 4k

    If I find 4k at 30Hz too much trouble, I will consider switching over to the MacBook rather than getting an external dock. Overall, having added an extra monitor does help a lot. The primary monitor usually has Visual Studio or Code, and the second one has Chrome, Teams, Spotify, etc. running. I might consider adding a monitor arm at some time to make the arrangement cleaner, but that’s something for later!

    Custom Authorization Policy Providers in .Net Core For Checking Multiple Azure AD Security Groups

    Extending Azure AD Groups Role based access to support combinations of multiple groups to grant access.

    In the post, .Net Core Web App and Azure AD Security Groups Role based access, we saw how to use Azure AD Security Groups to provide role-based access for your .Net Core applications. We covered only cases where our controllers/functions were granted access based on a single Azure AD Security Group. At times you might want to extend this to include multiple groups, e.g., a user can edit an order if they belong to the Admin OR Manager group, or only if they belong to both the Admin AND Manager groups. In this post, we will see how to achieve that.

    Belongs to Multiple Groups

    In the previous post, we added different policy per AD Security Group and used that in the Authorize attribute to restrict access to a particular Security Group, say Admin. If you want to limit functionality to users who belong to both Admin AND Manager, you can use two attributes one after the other as shown below.

    [Authorize(Policy = "Admin")]
    [Authorize(Policy = "Manager")]
    public partial class AddUsersController : ControllerBase

    The above code looks for policies named ‘Admin’ and ‘Manager’, which we registered on application startup using the services.AddAuthorization call (as shown in the previous post).
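    For reference, that startup registration would look roughly like the below sketch; IsMemberOfGroupRequirement and the adminGroup/managerGroup config objects are from the previous post, so treat the exact names as assumptions:

    ```csharp
    services.AddAuthorization(options =>
    {
        // One policy per AD Security Group
        options.AddPolicy("Admin", policy =>
            policy.AddRequirements(new IsMemberOfGroupRequirement(adminGroup)));
        options.AddPolicy("Manager", policy =>
            policy.AddRequirements(new IsMemberOfGroupRequirement(managerGroup)));
    });
    ```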

    Belongs to Any One Group

    In cases where you want to restrict access to a controller/function to users who are part of at least one of a given list of groups - e.g., the user is either Admin OR Manager - the natural tendency is to use a comma-separated list of values for the Policy, as shown below.

    [Authorize(Policy = "Admin,Manager")]
    public partial class AddUsersController : ControllerBase

    The above code looks for a policy named ‘Admin,Manager’, and for it to work, you need to register a policy with that exact name.

    options.AddPolicy(
        "Admin,Manager",
        policy =>
            policy.AddRequirements(new IsMemberOfAnyGroupRequirement(adminGroup, managerGroup)));

    As you can see, I have modified the IsMemberOfGroupRequirement class from the previous blog post into IsMemberOfAnyGroupRequirement, which now takes in a list of AzureAdGroupConfig. The handler for the requirement (IsMemberOfAnyGroupHandler) is updated to check whether the user’s claims include at least one of the required claims.

    public class AzureAdGroupConfig
    {
        public string GroupName { get; set; }
        public string GroupId { get; set; }
    }

    public class IsMemberOfAnyGroupRequirement : IAuthorizationRequirement
    {
        public AzureAdGroupConfig[] AzureAdGroupConfigs { get; set; }

        public IsMemberOfAnyGroupRequirement(params AzureAdGroupConfig[] groupConfigs)
        {
            AzureAdGroupConfigs = groupConfigs;
        }
    }

    public class IsMemberOfAnyGroupHandler : AuthorizationHandler<IsMemberOfAnyGroupRequirement>
    {
        protected override Task HandleRequirementAsync(
            AuthorizationHandlerContext context, IsMemberOfAnyGroupRequirement requirement)
        {
            foreach (var adGroupConfig in requirement.AzureAdGroupConfigs)
            {
                // Succeed if the user has a 'groups' claim matching any configured group
                var groupClaim = context.User.Claims
                    .FirstOrDefault(claim => claim.Type == "groups" &&
                        claim.Value == adGroupConfig.GroupId);
                if (groupClaim != null)
                {
                    context.Succeed(requirement);
                    break;
                }
            }
            return Task.CompletedTask;
        }
    }

    The code now works fine, and users belonging to either Admin OR Manager can now access the AddUsersController functionality.

    Custom Authorization Policy Providers

    Even though the above code works fine, we had to add a policy specific to the ‘Admin,Manager’ combination. These combinations can soon start to grow in a large application and become hard to maintain. Instead, you can add a custom authorization attribute and customize the policy retrieval to match our needs.

    IsMemberOfAnyGroupAttribute is a custom authorize attribute that takes in a list of group names and concatenates them using a known prefix and a separator. The known prefix, POLICY_PREFIX, helps us identify the kind of policy that we are looking at, given just the policy name.

    Make sure to choose a SEPARATOR that you know will not be there in your group names.

    public class IsMemberOfAnyGroupAttribute : AuthorizeAttribute
    {
        public const string POLICY_PREFIX = "IsMemberOfAnyGroup";
        public const string SEPARATOR = "_";
        private string[] _groups;

        public IsMemberOfAnyGroupAttribute(params string[] groups)
        {
            _groups = groups;
            var groupsName = string.Join(SEPARATOR, groups);
            Policy = $"{POLICY_PREFIX}{groupsName}";
        }
    }

    To create a custom policy provider, inherit from IAuthorizationPolicyProvider or the default implementation available, DefaultAuthorizationPolicyProvider. Since I wanted to fall back to the default policies first and only then provide custom policies, I am inheriting from DefaultAuthorizationPolicyProvider. The only parameter available to us is the policyName, which is used to resolve the appropriate policy. The code first checks for any default policies available, i.e., those which are explicitly registered; I assume any policies referred to by the default Authorize attribute will have an associated policy registered explicitly. For policies with the known prefix, POLICY_PREFIX, we extract the group names and build a new IsMemberOfAnyGroupRequirement dynamically.

    public class ADGroupsPolicyProvider : DefaultAuthorizationPolicyProvider
    {
        private List<AzureAdGroupConfig> _adGroupConfigs;

        public ADGroupsPolicyProvider(
            IOptions<AuthorizationOptions> options,
            List<AzureAdGroupConfig> adGroupConfigs) : base(options)
        {
            _adGroupConfigs = adGroupConfigs;
        }

        public override async Task<AuthorizationPolicy> GetPolicyAsync(string policyName)
        {
            // Fall back to explicitly registered policies first
            var policy = await base.GetPolicyAsync(policyName);

            if (policy == null &&
                policyName.StartsWith(IsMemberOfAnyGroupAttribute.POLICY_PREFIX))
            {
                var groups = policyName
                    .Replace(IsMemberOfAnyGroupAttribute.POLICY_PREFIX, string.Empty)
                    .Split(
                        new string[] { IsMemberOfAnyGroupAttribute.SEPARATOR },
                        StringSplitOptions.RemoveEmptyEntries);

                var groupConfigs = (from groupName in groups
                                    join groupConfig in _adGroupConfigs
                                    on groupName equals groupConfig.GroupName
                                    select groupConfig).ToArray();

                policy = new AuthorizationPolicyBuilder()
                    .AddRequirements(new IsMemberOfAnyGroupRequirement(groupConfigs))
                    .Build();
            }
            return policy;
        }
    }

    Don’t forget to register the new policy provider at Startup.

    services.AddSingleton<IAuthorizationPolicyProvider, ADGroupsPolicyProvider>();

    Using the new attribute is the same as before; however, you no longer need to register a policy for the ‘Admin,Manager’ combination explicitly. When the default policy provider cannot find a policy, our provider returns a dynamic policy with an IsMemberOfAnyGroupRequirement for the given groups. This happens automatically for any possible combination of groups.

    [IsMemberOfAnyGroupAttribute("Admin", "Manager")]
    public partial class AddUsersController : ControllerBase

    Hope this helps you extend the Policy-based authorization in ASP.Net Core applications and mix and match with the way you want to enable access for your users.

    I have been on and off the Pomodoro Technique and have always wanted to be more consistent in following it. A while back I was using Tomighty, a minimalistic Pomodoro timer. However, with Tomighty, I often forgot to start the timer and soon stopped using it altogether. Recently, when reading through a productivity tips article, I came across Toggl.

    Toggl is the leading time tracking app for agencies, teams and small businesses. A simple time tracker with powerful reports and cross-platform functionalities.


    Pomodoro Tracker

    Even though I am interested in tracking time, I am not so keen on the reports and cross-platform functionality that Toggl provides. Especially after trying to Minimalize my Online Life, I am very particular about adding a new app. The one feature that interests me in Toggl is the Pomodoro tracker that comes with the desktop app. Toggl has a mini timer that can float anywhere on your desktop and is very minimalistic. All it has is a task name with a Start/Stop button, displaying the elapsed time. Within the application settings, you can configure the Pomodoro interval and the break length. The timer stops automatically after the set Pomodoro interval.


    Another Toggl feature that I like is the reminder to track time/Pomodoro. You can choose the days, time, and interval for the reminder. Any time you are not tracking time, a Windows notification reminds you to do so. This reminder should help me stick with following the Pomodoro Technique.

    Toggle Reminder

    Toggl does not have any built-in option to start up when Windows start, but it’s easy enough to add the shortcut to Toggl in the Windows startup folder.

    Happy tracking!

    Building Windows Service Installer on Azure Devops

    Continuously building a Windows installer on Azure DevOps using VdProj or WiX.

    Recently I was looking into packaging a Windows Service as an MSI installer. I wanted the MSI created in the build pipeline, in this case Azure DevOps, and published as a build artifact. The Windows Service uses the .Net Framework, and looking around for installer options, I found mainly the two approaches discussed below.

    Visual Studio Installer Projects (*.VdProj)

    Microsoft Visual Studio Installer Projects is available as an extension to Visual Studio and adds support for installer projects. By adding this setup project to the solution, you can step through a wizard and create a setup file that installs your application. If you are looking at how to set up the Installer project, this stackoverflow answer shows you exactly how. Once you have the installer project set up locally and the MSI file generated on building the solution, we can set this up in the Azure DevOps pipeline and automate it.

    The Visual Studio Installer Projects require a custom build agent.

    The only way I could find to get the Installer Project to build an MSI file was to set up a custom build agent; hosted agents do not support this at the moment. I set up the custom agent on a Windows machine and have not tried any of the other variants. The only tricky thing with setting up the custom agent was step 4 under Prepare Permissions. To find the scope ‘Agent Pools (read, manage)’, make sure you click the ‘Show all/less scopes’ link towards the bottom of the page (as shown in the image below) - at times some things just miss your eyes! The rest was pretty straightforward, and you can have the custom build agent set up in minutes.

    Azure DevOps - Custom Agent token setup

    In your build pipeline definition, make sure to select the new custom agent pool as the Agent pool. The Build VS Installer is a custom task that can be used to build your Visual Studio Installer projects (.vdproj files). Since MSBuild cannot be used to build these projects, you need to make sure Visual Studio is installed on the agent, along with the Installer Projects extension. Setting up the custom task is straightforward - you can choose to build either one particular installer project in the solution or all of them.

    Azure Devops - Build Pipeline

    I ran into the error message ‘An error occurred while validating. HRESULT = 8000000A’ when running this build through the pipeline. I soon figured out that others had faced this in the past, and running DisableOutOfProcBuild.exe solved the issue. To do this in the pipeline, add a command line task (the ‘Set EnableOutOfProcBuild’ step in the image above) and use the script for the appropriate VS version.
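    The command line task for that step runs a script roughly like the one below. Treat the install path as an assumption - it is shown here for VS 2017 Enterprise and needs adjusting for the Visual Studio version and edition installed on your agent.

```shell
:: Allow devenv to build .vdproj files in an unattended pipeline run.
:: Path assumes VS 2017 Enterprise - adjust for your agent's VS version/edition.
cd "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\CommonExtensions\Microsoft\VSI\DisableOutOfProcBuild"
DisableOutOfProcBuild.exe
```

    This only needs to succeed once per agent, so you can also run it manually on the build machine instead of in every pipeline run.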

    Make sure to either select the ‘Create artifact for .msi’ option in the custom build task or manually copy the MSI out to the artifacts directory. The build now generates an MSI every time!
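    If you go the manual route, a command line task along these lines does the copy; the project and file names below are placeholders for illustration.

```shell
:: Copy the generated MSI into the staging directory that a subsequent
:: 'Publish Build Artifacts' task picks up. Names below are placeholders.
copy "MyService.Setup\Release\MyService.Setup.msi" "%BUILD_ARTIFACTSTAGINGDIRECTORY%"
```

    $(Build.ArtifactStagingDirectory) is exposed to scripts as the BUILD_ARTIFACTSTAGINGDIRECTORY environment variable, which is why the copy target looks slightly different from the pipeline variable name.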


    WiX Toolset

    WiX is an open-source project that provides a set of tools for building Windows installation packages. The installer packages are XML-based, and the learning curve is relatively steep. However, it offers a lot more features and capabilities than the Visual Studio Installer project we saw above. Microsoft-hosted agents support building WiX projects, and I was able to run them successfully on the Hosted VS2017 agent.

    WiX projects can run on the Hosted VS2017 agent. This reason alone makes WiX a far better choice than VdProj if you are starting fresh.

    Azure Devops WIX

    If you are running on a custom build agent, you will have to install the WiX toolset for everything to work. The default build task in Azure DevOps is all that is required to build the project, as WiX integrates well with MSBuild. As you can see, WiX is easier to set up and has fewer hassles, so I definitely recommend that path if you are not already invested in VdProj.
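    Because WiX setup projects are plain MSBuild projects, the equivalent of the default build task from a command line is roughly the following; the project name is a placeholder.

```shell
:: WiX setup projects build with MSBuild directly - no special task needed.
:: MyService.Setup.wixproj is a placeholder project name.
msbuild MyService.Setup.wixproj /p:Configuration=Release
```

    The MSI lands in the project’s output folder (bin\Release by default), from where the publish-artifact step can pick it up.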

    Hope this helps you set up building installer projects on Azure DevOps.

    Brisbane has a lot of places to visit nearby, especially ones you can cover in a day. We usually prefer starting in the morning at about 8 am and getting back by around 2-3 pm. To most of the places we carried food and had a small picnic, which my son, Gautham, enjoys as much as we do. This is all possible because of my wife, Parvathy, and special kudos to her culinary skills.


    Lake Moogerah

    Lake Moogerah makes an excellent place for a day trip, or even an overnight camp, with its scenic beauty and the activities around. You can go boating, take a stroll over the Moogerah Dam wall, or hike up the mountains for a great view. This place has everything in one spot and makes a perfect destination for the entire family.

    Lake Moogerah

    Venman Bushland

    Venman Bushland National Park is still one of my favorite walks around Brisbane. The park is also home to a lot of wildlife, and you might be lucky enough to see some if you keep your eyes open. We spotted a wallaby towards the end of the walk.

    Venman Bushland National Park

    Gold Coast and Sunshine Coast

    Gold Coast needs no introduction. If you are in Brisbane, there is every chance you have already heard of it and been there. There is something here for everyone: beaches, surfing, and theme parks, to name a few. There are around five theme parks, each of which takes a day in itself. Getting a yearly pass helps, as you can go back as many times as you want. Sea Life and Wet’n’Wild are the ones we visit often.

    Gold Coast

    Head in the opposite direction from the Gold Coast, and you reach the Sunshine Coast, which also has a lot to offer a day tripper. Sea Life is the right place for a day trip, and it has an Octonauts zone, which is one of Gautham’s favorites.

    Noosa Heads

    Surrounded by beach, river, hinterland and national parks, Noosa provides a wide range of activities and adventures. Check out the Noosa Markets if you are there on a Sunday.

    Noosa Heads

    Tin Can Bay

    If you fancy feeding wild dolphins, Tin Can Bay is the place to do it. It does get a bit crowded (even on a weekday), but everybody gets a chance. You might have to start early to make it to the centre by 7 in the morning. We stayed there overnight and combined it with Noosa Heads on the way. On the way back we went to Rainbow Beach, where you can drive along the beach if you are interested.

    Tin Can Bay

    Great White Rock

    At the Great White Rock, you can enjoy a wide range of activities, including hiking, bird-watching, horse riding, and mountain bike riding, to name a few. There are multiple hiking trails, making it perfect for all ages.

    Great White Rock

    Mt Coot-tha

    Located close to the city, Mount Coot-tha has a lot to offer. Don’t miss the scenic views from the lookout, which are especially great during sunrise and sunset. There are also multiple bushwalking trails, including a kids’ trail. The Planetarium, located in the Brisbane Botanic Gardens, has various shows and activities. The lookout is also a good ride up from the city if you are into cycling.

    Mount Coot Tha

    Glasshouse Mountains

    The Glass House Mountains are remnants of volcanic activity, and these volcanic peaks make a perfect day trip location. Good trails and lookouts along the way make the drive there an enjoyable one as well.

    Glasshouse Mountains


    Tamborine Mountain

    Tamborine Mountain has a lot to offer and will make you come back for more. Lots of different trails, the Skywalk, the Glow Worm Caves, and waterfalls are just a few. The glow worm caves are a unique experience, worth visiting, and your visit helps serve the cause of protecting the species.



    Springbrook

    Standing on top of an ancient volcano, Springbrook is just an hour’s drive from Brisbane and has views that stretch forever. You can see some of the oldest trees in Australia, cool swimming holes, and walking trails. Don’t miss out on the Natural Bridge, a picturesque rock formation formed naturally by the waterfall over the basalt cave.


    Nerima Gardens

    Nerima Gardens are the Japanese gardens of Ipswich and make an excellent getaway for the family. Right next to the gardens is the Ipswich Nature Centre, which houses a variety of animals and birds. Admission to both is free, which makes it even better (however, they really appreciate donations).

    Nerima Gardens

    Mt Nebo and Mt Glorious

    Mount Glorious and Mount Nebo are known for their bushwalking trails. The mountains are next to each other; however, they are best enjoyed over multiple days. There are tracks of varying levels, making them suitable for people of all fitness levels.

    Mount Glorious and Mount Nebo

    Eat Street

    With great city and river views, and open only from Friday to Sunday, Eat Street is a unique experience you can get in Brisbane. Lots of food options and entertainment make this a lively place. Make sure to check out their site for special events, jumping castles, etc., to keep the little ones in the family busy. There is a small entry fee, but make sure you get stamped if you are going out and want to return the same day.

    Eat Street

    Carseldine Markets

    The Carseldine Farmers and Artisan Markets are a great way to spend your Saturday morning, checking out the local produce, arts, and crafts along with some good food and coffee. There is no entry fee, and there is lots of parking as well. There are a lot of other similar markets around Brisbane; a quick Google search should help you find the ones nearest to you.

    Carseldine Markets

    Always make sure to check out the place details and general tips before heading out, especially if you are hiking. Carry enough water, sunscreen, insect repellent, etc. I take along the CamelBak Octane XCT on such trips, which holds enough water for the three of us. Check out the hiking checklist for more detailed instructions.

    Enjoy your weekends and sound off in the comments what other places you recommend checking out in and around Brisbane.

    Over the last week, I have been reading the book Digital Minimalism by Cal Newport. The central idea of the book is being aware of the various technologies affecting our lives and making a conscious effort to choose only those that are required and add value to your life.

    Minimalism is the art of knowing how much is just enough. Digital Minimalism applies this idea to our personal technology. It’s the key to living a focused life in an increasingly noisy world. - Cal Newport

    State of My Online Life

    Before starting with the book, I have to admit that my online life was not that ordered or thought through. However, I was aware of social media applications taking a significant part of my time - casually browsing without getting any value in return. I had intentionally stopped using Facebook for a couple of months, and now I am completely off it. Instagram soon followed, except for some occasional posting. I had uninstalled both apps from my phone, as the phone was the main access point to these sites. I had also intentionally turned off all notifications on the phone a long time ago and found it really helpful.

    However, what I was not aware of was that, with these two applications gone, I soon started relying on other apps to fill their place. I am into running and found myself spending more time on Strava. For the social part, I got more into WhatsApp, YouTube, LinkedIn, and Twitter. When nothing else was there, I was hanging on to the email applications, checking to see if anything interesting had come in (as if I was expecting a million-dollar email). Only after starting the book did I realize that these apps had taken over from the ones I had given up.

    Digital Declutter

    The title of the book, ‘Digital Minimalism’, immediately caught my attention when I first saw it, and I was keen to read it. Primarily, I wanted to reduce my phone usage, as most of my time was going there. Going through the digital declutter phase, I took note of all the technologies and applications on my phone. The Screen Time feature on the iPhone (you can use Digital Wellbeing if you are on Android, or install the RescueTime application) made me more aware of the time I had been spending on the phone and the apps that took most of it.

    Screen Time Report on iPhone

    I realized that a majority of my time was spent on WhatsApp, especially on group chats - scrolling through all the forwarded messages and videos in them and always checking back for more content. Even though I have a Kindle, I was reading more on the Kindle app on the phone, and most of the time when reading I would be distracted by something else and wander off to a different app. Even though I had notifications turned off, features like Badges took their place and started pulling me back into the apps.

    After noting down all the apps and analyzing them, I started decluttering my phone.

    • Exited all WhatsApp Group Chats
    • Removed the below apps
      • Emails (Gmail and Outlook)
      • Slack
      • Strava
      • Yammer
      • LinkedIn
      • Kindle
    • Disabled Badges notification
    • Disabled Raise to Wake: This is one of the features that lures you into looking at the phone even when you did not intend to.
    • Microsoft Teams: Initially I had removed Teams, but realized it was the only way to communicate quickly with my Readify team members, so I decided to install it back.

    By deleting the Kindle app, I am forcing myself not to use the phone for reading books. I am keen to try reading physical books and taking notes while reading, an idea that struck me while reading the Bullet Journal blog (which is also where I came across the book ‘Digital Minimalism’ for the first time). I was in India recently and took advantage of the much lower book prices there to gift myself some self-help books and a few others.


    Interestingly, I also came across Bullet Journaling at the same time, which aligns with Digital Minimalism as it forces you to put your ideas, thoughts, and to-dos on paper as opposed to in a digital system. I have started trying this out alongside and find it helpful; more on it in a different blog post.

    Bullet Journal

    Digital Minimalism Is An Ongoing Process

    Digital Minimalism is not a one-time activity, but something to perform on an ongoing basis and any time you think of adding a new technology into your life. It has been just over a week since I started decluttering my online life, and I am already seeing benefits. I pick up my phone less often and have less of a mental load to keep track of.

    Phone Pickups after Decluttering

    I plan to do the same decluttering process with my laptop once I get into the flow of the process. Decluttering is a great way to bring more focus into your life and gives you a lot more time than you previously had. How decluttered is your online life?