Tip of the Week: Squoosh - Make Images Smaller

Compress the images that you share online.

A while back I wrote about PNGGauntlet, a Windows application that reduces the size of PNG images. If you are looking for something that can handle any image format and is available right in the browser, check out Squoosh.

Squoosh is an image compression web app that allows you to dive into the advanced options provided by various image compressors.

Squoosh offers multiple compression options and defaults to MozJPEG. It also exposes advanced settings that you can play around with to find the combination that best suits your needs. The default compression settings alone provide a huge benefit (a 69% reduction in size in the image above). I use this primarily for the images that I share on this blog.

Squoosh your images!

Windows Service Using Topshelf, Quartz and Autofac

Walkthrough of setting up a recurrent job scheduler.

Whenever there is a need for automated jobs that have to run on-premises for a client, my default choice has been a Windows Service along with Quartz. Last year I blogged about one such instance. However, I did not get into the details of setting up the project and the associated dependencies to run the service.

In this post, I will walk through how I went about setting up this recurring job scheduler, to make it easier for me or anyone else who runs into a similar situation.

Topshelf

Topshelf makes creating Windows services easy by letting you run the application as a console app while developing it and then deploy it as a service. Setting up Topshelf is straightforward - all you need is a console application (targeting the .Net Framework) with a reference to the Topshelf NuGet package. To set up the service, modify Program.cs with some setup code that creates the Windows service and sets the service metadata.
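
A minimal sketch of that Program.cs setup, before any Autofac or Quartz wiring is added - the service name, display name and description are placeholder metadata, and the parameterless Start/Stop here are a simplification of the SchedulerService used later in the post:

using Topshelf;

public class Program
{
    public static int Main(string[] args)
    {
        var rc = HostFactory.Run(x =>
        {
            // Tell Topshelf how to construct, start and stop the service class
            x.Service<SchedulerService>(s =>
            {
                s.ConstructUsing(name => new SchedulerService());
                s.WhenStarted(tc => tc.Start());
                s.WhenStopped(tc => tc.Stop());
            });

            // Service metadata that shows up in the Windows Services console
            x.RunAsLocalSystem();
            x.SetServiceName("MyScheduler");
            x.SetDisplayName("My Scheduler Service");
            x.SetDescription("Runs recurring sync jobs on a schedule.");
        });

        // Return the Topshelf exit code so failures surface to the caller
        return (int)rc;
    }
}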

Autofac

With Topshelf set up, we have a running Windows service application. To add dependency injection so that you do not have to wire up all your dependencies manually, you can use Autofac. The Topshelf.Autofac library integrates Topshelf with the Autofac DI container - build the container and pass the instance to the UseAutofacContainer extension method on HostConfigurator.

var container = Bootstrapper.BuildContainer();

var rc = HostFactory.Run(x =>
{
    x.UseAutofacContainer(container);
    
    x.Service<SchedulerService>(s =>
    {
        s.ConstructUsingAutofacContainer();
        s.WhenStarted(tc => tc.Start(config));
        s.WhenStopped(tc => tc.Stop());
    });

});

Quartz.Net

The SchedulerService will now be instantiated using the Autofac container, which makes it easy to inject dependencies into it. We need to be able to schedule jobs within the SchedulerService, so we inject an IScheduler from Quartz.Net. Add a reference to the Quartz NuGet package, and you are all set to run jobs on a schedule. To integrate Quartz with Autofac so that job dependencies can also be injected via the container, we use the Autofac.Extras.Quartz NuGet package.

Wiring it Up

Below is a sample setup of the Autofac container that registers the jobs (MySyncJob) in the assembly (using the QuartzAutofacJobsModule) and adds the scheduler instance to the container (using the QuartzAutofacFactoryModule). The IDbConnection is registered to match the lifetime scope of a Quartz job so that each job run gets a separate connection instance.

public static class Bootstrapper
{
    public static IContainer BuildContainer()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<SchedulerService>();

        var schedulerConfig = new NameValueCollection
        {
            { "quartz.scheduler.instanceName", "MyScheduler" },
            { "quartz.jobStore.type", "Quartz.Simpl.RAMJobStore, Quartz" },
            { "quartz.threadPool.threadCount", "3" }
        };

        builder.RegisterModule(new QuartzAutofacFactoryModule
        {
            ConfigurationProvider = c => schedulerConfig
        });

        builder.RegisterModule(new QuartzAutofacJobsModule(typeof(MySyncJob).Assembly));

        var connectionString = ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;
        builder
           .RegisterType<SqlConnection>()
           .WithParameter("connectionString", connectionString)
           .As<IDbConnection>()
           .InstancePerMatchingLifetimeScope(QuartzAutofacFactoryModule.LifetimeScopeName);
       
        // Other registrations

        var container = builder.Build();
        return container;
    }
}

The SchedulerService class starts the scheduled jobs when the service starts up and shuts them down when the service stops. As you can see, the IScheduler instance is constructor injected by Autofac. On start, the jobs are added to the scheduler (I am using a cron schedule in the example below).

public class SchedulerService
{
    private readonly IScheduler _scheduler;

    public SchedulerService(IScheduler scheduler)
    {
        _scheduler = scheduler;
    }

    public void Start(ScheduleConfig config)
    {
        ScheduleJobs(config);
        _scheduler.Start().ConfigureAwait(false).GetAwaiter().GetResult();
    }

    private void ScheduleJobs(ScheduleConfig config)
    {
        ScheduleJobWithCronSchedule<MySyncJob>(config.MySyncJobSchedule);
        ScheduleJobWithCronSchedule<MyOtherSyncJob>(config.MyOtherSyncJobSchedule);
    }

    private void ScheduleJobWithCronSchedule<T>(string cronSchedule) where T : IJob
    {
        var jobName = typeof(T).Name;
        var job = JobBuilder
            .Create<T>()
            .WithIdentity(jobName, $"{jobName}-Group")
            .Build();

        var cronTrigger = TriggerBuilder
            .Create()
            .WithIdentity($"{jobName}-Trigger")
            .StartNow()
            .WithCronSchedule(cronSchedule)
            .ForJob(job)
            .Build();

        // Schedule the job along with its trigger
        _scheduler.ScheduleJob(job, cronTrigger).ConfigureAwait(false).GetAwaiter().GetResult();
    }

    public void Stop()
    {
        _scheduler.Shutdown().ConfigureAwait(false).GetAwaiter().GetResult();
    }
}

The sync jobs have their own dependencies, which are again injected using the Autofac container. Adding a new job is easy - all we need to do is make sure it gets set up with the appropriate schedule and that its dependencies are registered in the container.
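
As a rough sketch (assuming the Task-based IJob of Quartz.Net 3.x), a job is just a class implementing IJob with its dependencies constructor injected; the IDbConnection here and the elided body are placeholders for whatever the job actually needs:

using System.Data;
using System.Threading.Tasks;
using Quartz;

public class MySyncJob : IJob
{
    private readonly IDbConnection _connection;

    // Resolved from the Quartz lifetime scope set up in the Bootstrapper above
    public MySyncJob(IDbConnection connection)
    {
        _connection = connection;
    }

    public Task Execute(IJobExecutionContext context)
    {
        // Placeholder for the actual sync work, e.g. query the database
        // via _connection and push the results to an external system.
        return Task.CompletedTask;
    }
}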

Hope this helps you with setting up recurring scheduler jobs for on-prem scenarios.

Migrating Octopress To Hugo

Migrated my blog again - Here's how I went about doing it.

I have been on the Octopress blogging platform for around 5 years and was fairly happy with it. I had optimized the Octopress workflow for new posts, set it up for continuous delivery, and also enabled scheduling posts in the future.

I have been wanting to migrate off Octopress for a while (reasons below) but kept putting it off since I did not want to go through another migration. Now that it is all done, the migration was not as hard as I thought. In this post I will walk through the reasons for migrating away from Octopress, the actual migration steps involved, and how I tweaked the default Hugo settings/theme and workflow to get what I wanted.

Reasons To Migrate

  • No Longer Maintained: Octopress is no longer maintained by anyone, and it’s hard to keep up with all the dependent library updates and Ruby version changes. My builds were breaking randomly because of dependent package updates, and it was not something I liked dealing with.

  • Terribly Slow: Octopress takes around 2 minutes to build my full site. Since I had modified my workflow to build only the draft posts locally, I usually did not have to wait that long, but it is still slow.

These two reasons were pressing enough to migrate off Octopress. Hugo was the natural choice for its speed and community, and it is the next highest rated static site generator after Jekyll (on which Octopress is based). I also chose to migrate away from Azure hosting and use Netlify to host this blog.

Why Netlify? With Octopress, I had my build pipeline push the generated site contents back into GitHub (to a separate branch) and then had Azure deploy that branch automatically using a GitHub trigger. If I were to remain on Azure, I would have to do almost the same. Netlify comes with a Hugo template; it is automatically detected when you point Netlify at your repository, and it sets up everything required to build and deploy the generated static content. All I had to do was set the correct build environment variable for HUGO_VERSION.

Netlify can host your Hugo site with CDN, continuous deployment, 1-click HTTPS, an admin GUI, and its own CLI.

I moved this site over to HTTPS a while back and had been using Cloudflare’s Shared SSL. Moving over to the free Let’s Encrypt certificate required additional setup on Azure. Netlify takes away all this complexity and handles it for you in the background. Once you set up a custom domain, it is provisioned with a Let’s Encrypt certificate.

Migration

The actual migration of the content was mostly related to moving all the files and fixing up some code blocks.

  • Move Files Moving files was easy, as both platforms support Markdown. Everything from your source folder maps into the content folder in Hugo. Pages that were in folders in Octopress are now Markdown files with the appropriate names in Hugo. The actual posts in Markdown had the date prepended to the file name. Using a PowerShell script, I stripped off the first 11 characters (YYYY-MM-DD-) and moved the files into the blog folder (since all my blog posts are under the /blog URL path). All my images live under the static folder. Octopress to Hugo - Files

  • Fixing Code Blocks Octopress supported adding a custom title to code blocks, which is not available out of the box in Hugo. You can use a custom shortcode to set this up. However, I chose to remove the titles, as only a few code blocks had them. Using a regex search in VSCode, it is easy to get rid of them all at once.

Configuring Hugo

With all the content ported over, all that was left was to select a theme and customize it, make sure all existing URLs work on the new site, configure search, and set up a few other things.

After a bit of hunting around for themes, I decided to go with Minimo for its simplicity and its support for most of Hugo's configuration options. All my theme overrides are in the layouts folder. I added support for showing a paged list of all my blog posts and made a few layout changes for list views and headers. I also added Google Custom Search Engine support as a widget in the sidebar and set up a custom 404 page that lets you search the site for content.

The site is now running on Hugo + Netlify. Building the whole site (not just the drafts) takes around 3-4 seconds (cold build), and with the build watcher running, every file change rebuilds in around 200 milliseconds. Hugo is blazing fast. If you face any issues or have any feedback, kindly drop a comment or send me a tweet.

2018: What Went Well, What Didn't and Goals

A short recap of the year that is gone by and looking forward!

Posts per month - 2016

Another year has gone by so fast, and it is again time to do a year review.

TLDR;

2018 saw most of my time in running, cycling and learning to swim. Regular exercise and healthy eating helped maintain my weight. Lots of travel added to the excitement. Blogging, Reading, Photography and Open source took the back seat and did not go as planned. Looking forward to 2019!

What went well

Running, Cycling and Learning to Swim

As planned last year, I did two half marathon events - the Brisbane Great South Run and the Springfield Half Marathon. I also managed a sub-50 10k run after many tries. Even though running a marathon was one of the goals, it did not happen. Cycling was on and off, and towards the end of the year I started commuting to work (around 10k one way). However, it lasted only around 2 months as I changed client, for which I had to commute to the Gold Coast (around 70km, once a week). With swimming, I still struggle to do more than two laps at a stretch. The goal was to swim 1km by the end of 2018, but it looks like that's a long way to go. I was often lazy and not motivated enough to go out to the pool to practice.

I was able to spend all this time on these activities because of the full support of my wife, Parvathy.

Year in Sport

Travel

We covered a couple of major tourist destinations around Australia - the Great Barrier Reef (Cairns), Gold Coast, Frazer Island and Tasmania being the top highlights. We also did a lot of hikes and short trips around Brisbane, including Tin Can Bay and Ballina. Snorkeling in the Great Barrier Reef, the penguin tour in Tasmania, whale watching, the beach highway drive and 7-seater beach flight on Frazer Island, dolphin feeding at Tin Can Bay, and the croc attack show at Hartley’s Crocodile Park were some of the new experiences, and we enjoyed them a lot.

Community

I did two public talks, one at the .Net User Group on Azure Key Vault and one at Readify Back2Base on Ok I Have Got HTTPS! What Next?. Two is still a small number, and public speaking is something I want to do more of in the coming years.

Talk at .Net User Group

I created a YouTube pip video on the Key Vault Connected Service and wanted to do a couple more such videos, but they did not get prioritized over other things. My most significant open source contribution was to the Key Vault configuration builder that is part of the ASP.Net framework.

What didn’t go well

  • Blogging Blogging took a back seat this year, even though I had set out to do 4 posts a month. I did a total of 29 posts, which is roughly over 2 posts a month. Some months I missed out completely.

Posts per month in the year 2018

  • Reading The goal for this year was 20 books; however, I ended up reading only 6. 80/20 Running helped me a lot with my running.

  • Open Source I wanted to get actively involved with one open source project, but all that happened was a few minor contributions.

  • Learning Most of the learning was at work, and I did not actively take up learning anything new. I did start with Category Theory and FSharp again, but it did not stick for long.

Goals for 2019

  • Reading Read 10 books

  • Blogging 2 posts a month and stay consistent all months

  • Tri Sports Complete a marathon, a couple of half marathon events, at least one cycling event, and a 500m swim.

  • Learning Become more proficient with JavaScript. Build Azure Key Vault Explorer while learning.

I have been on and off with my planning and routines. I am looking to get back to it, organize my day-to-day activities a bit more, and be more accountable for the goals that I have set.

Wishing you all a Happy and Prosperous New Year!

Query Object Pattern and Entity Framework - Making Readable Queries

Using a Query Object to contain large query criteria and iterating over the query to make it more readable.

Search is a common requirement for most of the applications that we build today. Searching for data often involves multiple fields, data types, and data from multiple tables (especially when using a relational database). I was recently building a search page which involved searching for Orders - users needed the ability to search by different criteria such as the employee who created the order, orders for a customer, orders between particular dates, order status, and the delivery address. All criteria are optional and simply narrow down the search with additional parameters. We were building an API endpoint to query this data based on the parameters, using EF Core backed by Azure SQL.

In this post, we go through the code iterations I made to improve the readability of the query and keep it contained in a single place. The intention is to create a Query Object-like structure that holds all the query logic, keeping it centralized and readable.

A Query Object is an interpreter [Gang of Four], that is, a structure of objects that can form itself into a SQL query. You can create this query by referring to classes and fields rather than tables and columns. In this way, those who write the queries can do so independently of the database schema, and changes to the schema can be localized in a single place.

// Query Object capturing the Search Criteria
public class OrderSummaryQuery
{
    public int? CustomerId { get; set; }
    public DateRange DateRange { get; set; }
    public string Employee { get; set; }
    public string Address { get; set;}
    public OrderStatus OrderStatus { get; set; }
}

I have removed the final projection in all the queries below to keep the code to a minimum. We will go through the iterations that make the code more readable while keeping the generated SQL query as efficient as possible.

Iteration 1 - Crude Form

Let's start with the crudest form of the query, with all the possible criteria expressed inline. Since all the properties are nullable, each one is checked for a value before being used in the query.

(from order in _context.Order
join od in _context.OrderDelivery on order.Id equals od.OrderId
join customer in _context.Customer on order.CustomerId equals customer.Id
where order.Status == OrderStatus.Quote &&
      order.Active == true &&
      (query.Employee == null || 
      (order.CreatedBy == query.Employee || customer.Employee == query.Employee)) &&
      (!query.CustomerId.HasValue ||
      customer.Id == query.CustomerId.Value) &&
      (query.DateRange == null || 
      order.Created >= query.DateRange.StartDate && order.Created <= query.DateRange.EndDate))

Iteration 2 - Separating into Multiple Lines

With all those explicit AND (&&) clauses, the query is hard to understand and maintain. Splitting them into multiple where clauses makes it cleaner and keeps each search criterion independent. The SQL query that gets generated remains the same in this case.

Aesthetics of code is as important as the code you write. Aligning is an important part that contributes to the overall aesthetics of code.

from order in _context.Order
join od in _context.OrderDelivery on order.Id equals od.OrderId
join customer in _context.Customer on order.CustomerId equals customer.Id
where order.Status == orderStatus && order.Active == true
where query.Employee == null ||
      order.CreatedBy == query.Employee || customer.Employee == query.Employee
where !query.CustomerId.HasValue || customer.Id == query.CustomerId.Value
where query.DateRange == null ||
      (order.Created >= query.DateRange.StartDate && order.Created <= query.DateRange.EndDate)

Iteration 3 - Refactor to Expressions

Now that each criterion is independently visible, let's make each of the where clauses more readable. Refactoring them into regular C# functions makes the generated SQL inefficient, as EF cannot translate C# functions into SQL. Conditions in a plain C# function get evaluated on the client side, after retrieving all the data from the server. Depending on the size of your data, this is something you need to be aware of.

However, if you use Expressions, those get translated and evaluated on the server. Since all of the conditions in our where clauses can be represented as Expressions, let's move them to the query object class as properties returning Expressions. Because we need data from multiple tables, an intermediate projection, OrderSummaryQueryResult, carries the data from those tables, and all our expressions take that projection and apply the appropriate conditions to it.
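
The intermediate projection itself is just a plain class holding the joined entities - a minimal sketch, assuming Order, Customer and OrderDelivery entity types matching the joins used below:

public class OrderSummaryQueryResult
{
    public Order Order { get; set; }
    public Customer Customer { get; set; }
    public OrderDelivery OrderDelivery { get; set; }
}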

public class OrderSummaryQuery
{
    public Expression<Func<OrderSummaryQueryResult, bool>> BelongsToUser
    {
        get
        {
            return (a) => Employee == null ||
                      a.Order.CreatedBy == Employee || a.Customer.Employee == Employee;
        }
    }

    public Expression<Func<OrderSummaryQueryResult, bool>> IsActiveOrder...
    public Expression<Func<OrderSummaryQueryResult, bool>> ForCustomer...
    public Expression<Func<OrderSummaryQueryResult, bool>> InDateRange...
}
(from order in _context.Order
 join od in _context.OrderDelivery on order.Id equals od.OrderId
 join customer in _context.Customer on order.CustomerId equals customer.Id
 select new OrderSummaryQueryResult()
    { Customer = customer, Order = order, OrderDelivery = od })
.Where(query.IsActiveOrder)
.Where(query.BelongsToUser)
.Where(query.ForCustomer)
.Where(query.InDateRange)
-- Generated SQL when order status and employee name is set
SELECT [customer].[Name] AS [Customer], [order].[OrderNumber] AS [Number],
       [od].[Address], [order].[Created] AS [CreatedDate]
FROM [Order] AS [order]
INNER JOIN [OrderDelivery] AS [od] ON [order].[Id] = [od].[OrderId]
INNER JOIN [Customer] AS [customer] ON [order].[CustomerId] = [customer].[Id]
WHERE (([order].[Active] = 1) AND ([order].[Status] = @__OrderStatus_0)) AND 
      (([order].[CreatedBy] = @__employee_1) OR ([customer].[Employee] = @__employee_2))

If you use constructor initialization for the intermediate projection, OrderSummaryQueryResult, the where clauses get executed on the client side, so use the object initializer syntax to create the intermediate projection.
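
For completeness, the filter expressions elided above follow the same shape as BelongsToUser. As an illustration only (hypothetical implementations, mirroring the conditions from Iteration 1), ForCustomer and InDateRange could look like this:

public Expression<Func<OrderSummaryQueryResult, bool>> ForCustomer
{
    get
    {
        return (a) => !CustomerId.HasValue || a.Customer.Id == CustomerId.Value;
    }
}

public Expression<Func<OrderSummaryQueryResult, bool>> InDateRange
{
    get
    {
        return (a) => DateRange == null ||
                  (a.Order.Created >= DateRange.StartDate && a.Order.Created <= DateRange.EndDate);
    }
}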

Iteration 4 - Refactoring to Extension method

After the last iteration, we have a query that is easy to read and understand. We also have all the filters consolidated within the query object, which acts as the single place holding all the query logic. However, something still did not feel right, so I had a quick chat with my friend Bappi, and we refined it further. The above query has too many where clauses, repeated once for each of the filters. To encapsulate this further, I moved all the filter expressions into a property returning an Enumerable and wrote an extension method, ApplyAllFilters, to apply them all.

// Expose one property for all the filters 
public class OrderSummaryQuery
{
    public IEnumerable<Expression<Func<OrderSummaryQueryResult, bool>>> AllFilters
    {
        get
        {
            yield return IsActiveOrder;
            yield return BelongsToUser;
            yield return ForCustomer;
            yield return InDateRange;
        }
    }

    private Expression<Func<OrderSummaryQueryResult, bool>> BelongsToUser...
    private Expression<Func<OrderSummaryQueryResult, bool>> IsActiveOrder...
    private Expression<Func<OrderSummaryQueryResult, bool>> ForCustomer...
    private Expression<Func<OrderSummaryQueryResult, bool>> InDateRange...
}

... 

// Extension Method on IQueryable
{
    public static IQueryable<T> ApplyAllFilters<T>(
        this IQueryable<T> queryable,
        IEnumerable<Expression<Func<T, bool>>> filters)
    {
        foreach (var filter in filters)
            queryable = queryable.Where(filter);

        return queryable;
    }
}
{
    (from order in _context.Order
    join od in _context.OrderDelivery on order.Id equals od.OrderId
    join customer in _context.Customer on order.CustomerId equals customer.Id
    select new OrderSummaryQueryResult() { Customer = customer, Order = order, OrderDelivery = od })
    .ApplyAllFilters(query.AllFilters)
    
    ...
}

The search query is much more readable than what we started with in Iteration 1. One thing you should always be careful about with EF is making sure that the generated SQL is optimized and that you know what gets executed on the server versus the client. Use SQL Profiler or configure logging to see the generated SQL. You can also configure EF Core to throw an exception (in your development environment) when a query falls back to client evaluation.
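
For EF Core 2.x, that client-evaluation exception can be switched on when configuring the DbContext - a minimal sketch, assuming a DbContext named OrdersContext and a placeholder connection string:

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Diagnostics;

public class OrdersContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder
            .UseSqlServer("<connection string>")
            // Throw (ideally only in development) instead of silently
            // evaluating part of the query on the client
            .ConfigureWarnings(warnings =>
                warnings.Throw(RelationalEventId.QueryClientEvaluationWarning));
    }
}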

Hope this helps you write cleaner and more readable queries. Sound off in the comments if you have thoughts on refining this further or on any other patterns that you use.

Exclude Certain Scripts From Transaction When Using DbUp

Certain commands cannot run under a transaction. See how you can exclude them while still keeping the rest of your scripts under a transaction.

Recently I wrote about Setting Up DbUp in Azure Pipelines at one of my clients. We had all our scripts run in Transaction Per Script mode, and it was all working fine until we had to deploy some SQL scripts that cannot run under a transaction. So now I have a bunch of SQL script files that can run under a transaction and some (like the ones below - Full-Text Search) that cannot. By default, if you run such a script using DbUp under a transaction, you get the error message 'CREATE FULLTEXT CATALOG statement cannot be used inside a user transaction', and this is a known issue.

CREATE FULLTEXT CATALOG MyCatalog
GO

CREATE FULLTEXT INDEX 
ON  [dbo].[Products] ([Description])
KEY INDEX [PK_Products] ON MyCatalog
WITH CHANGE_TRACKING AUTO
GO

One option would be to turn off transactions altogether using builder.WithoutTransaction() (the default transaction setting), and everything would work as usual. But if you want each of your scripts to run under a transaction where possible, you can choose either of the options below.

Using Pre-Processors to Modify Script Before Execution

Script pre-processors are an extensibility hook into DbUp that allow you to modify a script before it gets executed, so we can wrap each SQL script in a transaction ourselves. In this case, you configure your builder to run WithoutTransaction and modify each script file before execution, explicitly wrapping it in a transaction where required. Writing a custom pre-processor is just a matter of implementing the IScriptPreprocessor interface, which hands you the contents of the script file to modify. Here, all I do is check whether the text contains 'CREATE FULLTEXT' and wrap it in a transaction if it does not. You could use file-name conventions or any other rule of your choice to perform the check and conditionally wrap with a transaction.

public class ConditionallyApplyTransactionPreprocessor : IScriptPreprocessor
{
    public string Process(string contents)
    {
        if (!contents.Contains("CREATE FULLTEXT", StringComparison.InvariantCultureIgnoreCase))
        {
            var modified =
                $@"
BEGIN TRANSACTION   
BEGIN TRY
           {contents}
    COMMIT;
END TRY
BEGIN CATCH
    ROLLBACK;
    THROW;
END CATCH";

            return modified;
        }
        else
            return contents;
    }
}
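
Wiring the pre-processor into the upgrade engine is then a matter of registering it on the builder and turning off DbUp's own transaction handling - a rough sketch, with the connection string and the embedded-scripts assembly as placeholders:

var upgrader = DeployChanges.To
    .SqlDatabase(connectionString)
    .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
    // The pre-processor decides which scripts get wrapped in a transaction
    .WithPreprocessor(new ConditionallyApplyTransactionPreprocessor())
    .WithoutTransaction()
    .LogToConsole()
    .Build();

var result = upgrader.PerformUpgrade();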

Using Multiple UpgradeEngine to Deploy Scripts

If you would rather not tweak the pre-processing step and want to stick with the default DbUp implementation while still keeping transactions for your scripts where possible, you can use multiple upgraders to do the job. Iterate over all your script files and partition them into batches of files that should run under a transaction and those that cannot. As shown in the image below, you end up with multiple batches alternating between transactional and non-transactional sets of scripts. When performing the upgrade for a batch, set WithTransactionPerScript on the builder conditionally. If any batch fails, you can terminate the database upgrade.

Script file batches

{
    Func<string, bool> canRunUnderTransaction = (fileName) => !fileName.Contains("FullText");
    Func<List<string>, string, bool> belongsToCurrentBatch = (batch, file) =>
        batch != null &&
        canRunUnderTransaction(batch.First()) == canRunUnderTransaction(file);
    
    var batches = allScriptFiles.Aggregate
        (new List<List<string>>(), (current, next) =>
            {
                if (belongsToCurrentBatch(current.LastOrDefault(),next))
                    current.Last().Add(next);
                else
                    current.Add(new List<string>() { next });

                return current;
            });

    foreach (var batch in batches)
    {
        // Batches are homogeneous, so a batch gets a transaction only if its scripts support one
        var includeTransaction = batch.All(canRunUnderTransaction);

        var result = PerformUpgrade(batch.ToSqlScriptArray(), includeTransaction);

        if (!result.Successful)
        {
            Console.ForegroundColor = ConsoleColor.Red;
            Console.WriteLine(result.Error);
            Console.ResetColor();
            return -1;
        }
    }

    Console.ForegroundColor = ConsoleColor.Green;
    Console.WriteLine("Success!");
    Console.ResetColor();
    return 0;
}

private static DatabaseUpgradeResult PerformUpgrade(
    SqlScript[] scripts,
    bool includeTransaction)
{
    var builder = DeployChanges.To
        .SqlDatabase(connectionString)
        .WithScripts(scripts)
        .LogToConsole();

    if (includeTransaction)
        builder = builder.WithTransactionPerScript();

      var upgrader = builder.Build();

    var result = upgrader.PerformUpgrade();

    return result;
}

Keeping all your scripts in a single place and automating it through the build-release pipeline is something you need to strive for. Hope this helps you to continue using DbUp even if you want to execute scripts that are a mix of transactional and non-transactional.

.Net Core Web App and Azure AD Groups Role based access

Use Azure AD groups to enable/disable functionality for your users based on their Roles.

Providing application capabilities based on the role of the user is a common requirement. When using Azure Active Directory (AD), the Groups feature allows organizing the users of your system into different roles. In the applications that we build, the group information can be used to enable/disable functionality. For example, if your application has the ability to add new users, you might want to restrict it to users belonging to the administrator role.

New groups can be added using the Azure portal. Select Security as the Group Type, since the group is intended to provide permissions based on roles.

Azure AD Add Group

For the groups to be returned as part of the claims, the groupMembershipClaims property in the application manifest needs to be updated. Setting it to SecurityGroup returns all security groups the user belongs to.

{
    "groupMembershipClaims": "SecurityGroup"
}

Each group created is assigned an ObjectId, which is what gets returned as part of the claims. You can either add these ids to your application's config file or use the Microsoft Graph API to query the list of groups at runtime. Here I have chosen to keep them in the config file.

"AdGroups": [
  {
    "GroupName": "Admin",
    "GroupId": "119f6fb5-a325-47f9-9889-ae6979e9e120"
  },
  {
    "GroupName": "Employee",
    "GroupId": "02618532-b2c0-4e58-a32e-e715ddf07f63"
  }
]
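
The configuration above binds to a simple POCO - a minimal sketch matching the AdGroupConfig type used in the registration code further below:

public class AdGroupConfig
{
    public string GroupName { get; set; }
    public string GroupId { get; set; }
}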

Now that we have all the groups and the associated configuration set up, we can wire up the .Net Core web application to use the groups from the claims to enable/disable features. Using the policy-based authorization capabilities of .Net Core, we can set up a policy for each of the groups we have.

Role-based authorization and claims-based authorization use a requirement, a requirement handler, and a pre-configured policy. These building blocks support the expression of authorization evaluations in code. The result is a richer, reusable, testable authorization structure.

We have an IsMemberOfGroupRequirement class to represent the requirement for a group and an IsMemberOfGroupHandler that implements how to validate that requirement. The handler reads the current user's claims and checks whether they contain the ObjectId associated with the group. If a match is found, the requirement is marked as successful. Since we want the request to continue evaluating any other group requirements, the requirement is never failed explicitly.

public class IsMemberOfGroupRequirement : IAuthorizationRequirement
{
    public readonly string GroupId ;
    public readonly string GroupName ;

    public IsMemberOfGroupRequirement(string groupName, string groupId)
    {
        GroupName = groupName;
        GroupId = groupId;
    }
}

public class IsMemberOfGroupHandler : AuthorizationHandler<IsMemberOfGroupRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, IsMemberOfGroupRequirement requirement)
    {
        var groupClaim = context.User.Claims
             .FirstOrDefault(claim => claim.Type == "groups" &&
                 claim.Value.Equals(requirement.GroupId, StringComparison.InvariantCultureIgnoreCase));

        if (groupClaim != null)
            context.Succeed(requirement);

        return Task.CompletedTask;
    }
}

Registering the handler and a policy for each group in the application's configuration file can be done as below. Looping through all the groups in the config, we create a policy for each one, named after the associated GroupName. This lets us use the GroupName as the policy name wherever we want to restrict features to users belonging to that group.

services.AddAuthorization(options =>
{
    var adGroupConfig = new List<AdGroupConfig>();
    _configuration.Bind("AdGroups", adGroupConfig);

    foreach (var adGroup in adGroupConfig)
        options.AddPolicy(
            adGroup.GroupName, 
            policy =>
                policy.AddRequirements(new IsMemberOfGroupRequirement(adGroup.GroupName, adGroup.GroupId)));
});

services.AddSingleton<IAuthorizationHandler, IsMemberOfGroupHandler>();

Using a policy is now as simple as decorating your controllers with the Authorize attribute and specifying the required policy name, as shown below.

[Authorize(Policy = "Admin")]
[ApiController]
public partial class AddUsersController : ControllerBase
{
    ....
}

Hope this helps you set up role-based functionality for your ASP.Net Core applications using Azure AD as the authentication/authorization provider.

Azure AD Custom Attributes and Optional Claims from an ASP.Net Application

Adding and retrieving custom attributes from an Azure AD

When using Azure Active Directory to manage your users, it is a common requirement to add additional attributes to the users, like a Skype id, employee code, employee id, and so on. Even though this is a common need, getting it done is not that straightforward. This post describes how you can get additional properties onto User objects in Azure AD.

When I recently had to do this at a client, we had the users in Azure AD, and the additional property, employeeCode, was available in an internal application that had the user's Azure email address mapped to it. We needed these codes synced across to Azure AD and made available as part of the claims for a website that uses Azure AD authentication.

Adding Custom Attribute using Directory Schema Extensions

An Azure AD user has a set of default properties, manageable through the Azure Portal. Any additional property on User gets added as an extension to the current user schema, so to add a new property we first need to register an extension. Adding a new extension can be done using the Graph Explorer website. You need to specify the appropriate directory name (e.g., contoso.onmicrosoft.com) and the applicationObjectId - the Object Id of the AD application that the web application uses to authenticate with Azure AD.

Azure AD supports a similar type of extension, known as directory schema extensions, on a few directory object resources. Although you have to use the Azure AD Graph API to create and manage the definitions of directory schema extensions, you can use the Microsoft Graph API to add, get, update and delete data in the properties of these extensions.

POST https://graph.windows.net/contoso.onmicrosoft.com/applications/
    <applicationObjectId>/extensionProperties?api-version=1.5 HTTP/1.1
{
    "name": "employeeCode<optionalEnvironmentName>",
    "dataType": "String",
    "targetObjects": [
        "User"
    ]
}

The response gives back the fully-qualified extension property name, which is used to write values to the property. Usually the name is of the format extension_<adApplicationIdWithoutDashes>_extensionPropertyName

If you have multiple environments (like Dev, Test, UAT, Prod) all pointing to the same Active Directory, it is a good idea to append the environment name to the extension property. It avoids any bad data issues between environments as all these properties get written to the same User object. You can automate the above step using any scripting language of your choice if required.

Setting Values for Custom Attributes

Now that we have the extension property created on the AD application, we can set the property on the User object. If you want to set this manually, you can use the GraphExplorer website again to do this.

PATCH https://graph.windows.net/contoso.onmicrosoft.com/users
        /jim@contoso.onmicrosoft.com?api-version=1.5
{
    "extension_ab603c56068041afb2f6832e2a17e237_employeeCode<optionalEnvironmentName>": "EMP124"
}

In our case, it was not a one-off update of a User object, so we wanted this automated. Employee codes were available from a database along with the associated Azure AD email address, so we created a Windows service job to sync these codes to Azure AD. You can write to Azure AD schema extension properties using the Microsoft Graph API. Add a reference to the Microsoft Graph NuGet package, and you are all set to go. For the Graph API to authenticate, use a different Azure AD app (separate from the one you registered the extension property on, which the web app uses to authenticate), because it needs additional permissions and it is a good idea to isolate those. Under Settings -> Required Permissions, add Microsoft Graph and grant the relevant permissions for it to write the user's profile/directory data.

Azure AD Graph API Permissions

private static async Task<GraphServiceClient> GetGraphApiClient()
{
    var clientId = ConfigurationManager.AppSettings["AppId"];
    var secret = ConfigurationManager.AppSettings["Secret"];
    var domain = ConfigurationManager.AppSettings["Domain"];

    var credentials = new ClientCredential(clientId, secret);
    var authContext =
        new AuthenticationContext($"https://login.microsoftonline.com/{domain}/");
    var token = await authContext
        .AcquireTokenAsync("https://graph.microsoft.com/", credentials);

    var graphServiceClient = new GraphServiceClient(new DelegateAuthenticationProvider((requestMessage) =>
    {
        requestMessage
            .Headers
            .Authorization = new AuthenticationHeaderValue("bearer", token.AccessToken);

        return Task.CompletedTask;
    }));

    return graphServiceClient;
}
private async Task UpdateEmployeeCode(
    string employeeCodePropertyName, GraphServiceClient graphApiClient, Employee employee)
{
    var dictionary = new Dictionary<string, object>();
    dictionary.Add(employeeCodePropertyName, employee.Code);

    await graphApiClient.Users[employee.EmailAddress]
        .Request()
        .UpdateAsync(new User()
        {
            AdditionalData = dictionary
        });
}

Looping through all the employee codes, you can push them all into Azure AD at regular intervals. To verify that the attributes are updated correctly, you can either use the Graph API client to read the extension property or use the Graph Explorer website.
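
A rough sketch of that sync loop (GetEmployeesToSync and the config key for the extension property name are placeholders for however you load the codes and settings):

private async Task SyncEmployeeCodes()
{
    // Fully-qualified extension property name returned when registering the extension
    var employeeCodePropertyName =
        ConfigurationManager.AppSettings["EmployeeCodePropertyName"];

    var graphApiClient = await GetGraphApiClient();

    // e.g. read (EmailAddress, Code) pairs from the internal database
    var employees = GetEmployeesToSync();

    foreach (var employee in employees)
        await UpdateEmployeeCode(employeeCodePropertyName, graphApiClient, employee);
}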

Accessing Custom Attributes through Claims

With Azure AD updated with the employee code for each user, we can now set up the AD application to return the additional property as part of the claims when the web application authenticates against it. The application manifest of the Azure AD application needs to be modified to return the extension property as part of the claims. By default, the optionalClaims property is set to null; update it with the values below.

Azure AD Application Manifest - Optional Claims

"optionalClaims": {
    "idToken": [
      {
        "name": "extension_<id>_employeeCodeLocal",
        "source": "user",
        "essential": true,
        "additionalProperties": []
      }
    ],
    "accessToken": [],
    "saml2Token": []
  },

I updated the idToken property, as the .Net Core web application was using a JWT ID token. If you are unsure which token your application uses, you can use Fiddler to find out (as shown below).

Id token returned

With the optionalClaims set, the web application is ready to go. For an authenticated user (with the extension property set), the extension property is available as part of the claims, with the claim type 'extn.employeeCode'. The code below can be used to extract the employee code from the claim.

public static string GetEmployeeCode(this ClaimsPrincipal claimsPrincipal)
{
    if (claimsPrincipal == null || claimsPrincipal.Claims == null)
        return null;

    var empCodeClaim = claimsPrincipal.Claims
        .FirstOrDefault(claim => claim.Type.StartsWith("extn.employeeCode"));

    return empCodeClaim?.Value;
}

Usually, the claims start flowing through immediately. However, it once happened to me that the claims did not come through for a long period. I am not sure what I did wrong, but once I deleted and recreated the AD application, it started working fine.

Although setting additional properties on Azure AD users is a common requirement, setting it up is not that straightforward. I hope the portal improves someday so that it is as easy as defining a list of key-value extension properties and having them seamlessly flow through as part of the claims. Until that day, I hope this helps you set up extra information on your Azure AD users.

Setting up DbUp in Azure Pipelines

DbUp in a .Net core console application and Azure Pipelines.

Azure Pipelines is part of the Azure DevOps offering, which enables you to continuously build, test, and deploy to any platform and cloud environment. It has been out for a while, and it is only recently that I got a chance to play around with it at one of my clients. We use DbUp, a .Net library, to deploy schema changes to our SQL Server database. It tracks which SQL scripts have already been run and runs the change scripts needed to get your database up to date.

Setting up DbUp is very easy, and you can use the script straight from the docs to get started. If you are using the .Net Core console application VS template to set up DbUp, make sure to modify the return type of the Main function to int and return the appropriate application exit codes (as in the script from the docs). I made the mistake of removing the return statements, only to realize later that the build was passing even though the DbUp scripts were failing.

If you are using the .Net Core console application VS template (like I did), make sure you modify the return type of the Main function in Program.cs to int.
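
For reference, a minimal sketch of what Program.cs ends up looking like, closely following the DbUp getting-started script; passing the connection string as the first command-line argument is an assumption that matches how the release pipeline below invokes it:

using System;
using System.Linq;
using System.Reflection;
using DbUp;

class Program
{
    static int Main(string[] args)
    {
        // Connection string passed in from the release pipeline
        var connectionString = args.FirstOrDefault();

        var upgrader = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();

        if (!result.Successful)
        {
            Console.ForegroundColor = ConsoleColor.Red;
            Console.WriteLine(result.Error);
            Console.ResetColor();
            // Non-zero exit code fails the pipeline step
            return -1;
        }

        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine("Success!");
        Console.ResetColor();
        return 0;
    }
}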

In Azure Pipelines, I have the build step publish the build output as a zip artifact. Using this in the release pipeline is a two-step process.

1 - Extract Zip Package

Using the Extract Files task, extract the zip package from the build artifacts. You can specify a destination folder for the files to be extracted to (as shown below).

Extract package

2 - Execute DbUp Package

With the package extracted into a folder, we can now execute the console application (using the dotnet command line), passing in the connection string as a command-line argument.

Execute package

You now have your database deployments automated through the Azure Pipelines.

With Azure Pipelines you can continuously build, test, and deploy to any cloud platform. Azure Pipelines has multiple options to start with, based on your project. Even if you are developing a private application, Pipelines offers one free Microsoft-hosted parallel job with up to 1800 minutes per month, and also one free self-hosted parallel job with unlimited minutes (as it is running on your infrastructure anyway).

On the Microsoft-hosted CI/CD plan with 1800 minutes, you might need to find the used/remaining time at any point during the month. You can find the remaining minutes from the Azure DevOps portal after selecting the relevant organization.

Organization settings -> Retention and parallel jobs -> Parallel Jobs

Azure Devops Pipelines - Remaining Build Minutes

Hope that helps you find the remaining free build minutes for your organization!
