Building Windows Service Installer on Azure Devops

Continuously building a Windows installer on Azure DevOps using VdProj or WiX

Recently I was looking into packaging a Windows Service as an MSI installer. I wanted the MSI created in the build pipeline, in this case Azure DevOps, and published as a build artifact. The Windows Service uses the .NET Framework, and looking around for installer options I found mainly the two approaches discussed below.

Visual Studio Installer Projects (*.VdProj)

Microsoft Visual Studio Installer Projects is available as an extension to Visual Studio and adds support for installer projects within Visual Studio. By adding a setup project to the solution, you can create a setup file that steps the user through a wizard and installs your application. If you are looking for how to set up the installer project, this Stack Overflow answer shows you exactly how. Once you have the installer project set up locally and the MSI file generated on building the solution, we can set this up in an Azure DevOps pipeline and automate it.

The Visual Studio Installer Projects require a custom build agent.

The only way I could find to get the Installer Project to build an MSI file was to set up a custom build agent; hosted agents do not support this at the moment. I set up the custom agent on a Windows machine and have not tried any of the other variants. The only tricky thing with setting up the custom agent was step 4 under Prepare Permissions. To find the scope ‘Agent Pools (read, manage)’, make sure you click the ‘Show all/less scopes’ link towards the bottom of the page (as shown in the image below) - at times some things just miss your eyes! The rest was pretty straightforward, and you can have the custom build agent set up in minutes.

Azure DevOps - Custom Agent token setup

In your build pipeline definition, make sure to select the new custom agent as your default Agent pool. The Build VS Installer is a custom task that can be used to build your Visual Studio Installer projects (.vdproj files). Since MSBuild cannot be used to build these projects, you need to make sure Visual Studio is installed on the agent along with the Installer Projects extension. Setting up the custom task is straightforward - you can choose to build either one particular installer project in the solution or all of them.

Azure Devops - Build Pipeline

I ran into the error message An error occurred while validating. HRESULT = ‘8000000A’ when running this build through the pipeline. I soon figured out that others had faced this in the past. Running DisableOutOfProcBuild.exe solved the issue. To do this in the pipeline, add a command line task (the Set EnableOutOfProcBuild step in the image above) and use the script for the appropriate VS version.

Make sure to either select the ‘Create artifact for .msi’ option in the custom build task or manually copy the MSI out to the artifacts directory. The build now generates an MSI every time!

WIX

WiX is an open source project that provides a set of tools for building Windows installation packages. The installer packages are authored in XML, and the learning curve is relatively steep. However, it offers a lot more features and capabilities than the Visual Studio installer project we saw above. Microsoft-hosted agents support building WiX projects, and I was able to run them successfully on the Hosted VS2017 agent.

WiX projects can run on the Hosted VS2017 agent. Just this one reason makes WiX a far better choice over VdProj if you are starting fresh.

Azure Devops WIX

If you are running on a custom build agent, you will have to install the WiX Toolset for everything to work. The default build task in Azure DevOps is all that is required to build the project, as WiX integrates well with MSBuild. As you can see, WiX is easier to set up and involves less hassle, so I definitely recommend that path if you are not already invested in VdProj.

Hope this helps you set up building installer projects on Azure DevOps.

Brisbane has a lot of places to go around, especially ones you can cover in a day. We usually prefer starting in the morning at about 8 am and getting back by around 2-3 pm. To most of the places we carried food and had a small picnic, which my son, Gautham, enjoys as much as we do. This is all possible because of my wife, Parvathy, and special kudos to her culinary skills.

TLDR;

Lake Moogerah

Lake Moogerah makes an excellent place for a day trip, or even an overnight camp, with its scenic beauty and the activities around it. You can go boating, take a stroll over the Moogerah Dam wall or hike up the mountains for a great view. This place has everything in one spot and makes a perfect outing for the entire family.

Lake Moogerah

Venman Bushland

Venman Bushland National Park is still one of my favorite walks around Brisbane. The park is also home to a lot of wildlife, and you might be lucky enough to see some if you keep your eyes open. We spotted a wallaby towards the end of the walk.

Venman Bushland National Park

Gold Coast and Sunshine Coast

Gold Coast needs no introduction. If you are in Brisbane, there is every chance that you have already heard of it and been there. There is something here for everyone: beaches, surfing and theme parks, to name a few. There are around five theme parks, each of which takes a day in itself. Getting a yearly pass helps, and you can go back as many times as you want. Sealife and Wet’n’Wild are the ones we visit most often.

Gold Coast

Head in the opposite direction from the Gold Coast, and you reach the Sunshine Coast, which also has a lot to offer a day tripper. Sealife there is the right place for a day trip, and it has an Octonauts zone, which is one of Gautham’s favorites.

Noosa Heads

Surrounded by beach, river, hinterland and national parks, Noosa provides a wide range of activities and adventures. Check out the Noosa Markets if you are there on a Sunday.

Noosa Heads

Tin Can Bay

If you fancy feeding wild dolphins, Tin Can Bay is the place to do it. It does get a bit crowded (even on a weekday), but everybody gets a chance. You will have to start early if you want to make it to the centre by 7 in the morning. We stayed there overnight and combined it with Noosa Heads on the way. On the way back we went to Rainbow Beach, where you can drive along the beach if you are interested.

Tin Can Bay

Great White Rock

At the Great White Rock, you can enjoy a wide range of activities including hiking, bird-watching, horse riding and mountain bike riding, to name a few. There are multiple hiking trails, making it perfect for all ages.

Great White Rock

Mt Coot-tha

Located close to the city, Mount Coot-tha has a lot to offer. Don’t miss the scenic views from the lookout, which are especially great during sunrise and sunset. There are also multiple bushwalking trails, including a kids’ trail. The Planetarium located in the Brisbane Botanic Gardens has various shows and activities. The lookout is also a good ride up from the city if you are into cycling.

Mount Coot Tha

Glasshouse Mountains

The Glass House Mountains are remnants of volcanic activity, and these volcanic peaks make a perfect day trip location. Good trails and lookouts along the way make the drive there an enjoyable one as well.

Glasshouse Mountains

Tamborine

The Tamborine mountains have a lot to offer and will make you come back for more. Lots of different trails, the Skywalk, the Glow Worm Caves and waterfalls are just a few. The glow worm caves are a unique experience, worth visiting while also helping the cause of protecting the species.

Tamborine

Springbrook

Standing on top of an ancient volcano, Springbrook is just an hour’s drive from Brisbane and has views that stretch forever. You can see some of the oldest trees in Australia, cool swimming holes and walking trails. Don’t miss the Natural Bridge, a picturesque rock formation formed naturally by the waterfall flowing over a basalt cave.

Springbrook

Nerima Gardens

Nerima Gardens are the Japanese gardens of Ipswich and make for an excellent getaway for the family. Right next to the gardens is the Ipswich Nature Centre, which houses a variety of animals and birds. Admission to these parks is free, which makes it even better (however, they really appreciate donations).

Nerima Gardens

Mt Nebo and Mt Glorious

Mount Glorious and Mount Nebo are known for their bushwalking trails. The mountains are next to each other but are best enjoyed over multiple days. There are tracks of varying levels, making them suitable for people of all fitness levels.

Mount Glorious and Mount Nebo

Eat Street

With great city and river views, and open only from Friday to Sunday, Eat Street is a unique experience you can get in Brisbane. Lots of food and entertainment options make this a lively place. Make sure to check out their site for special events, jumping castles, etc. to keep the little ones in the family busy. There is a small entry fee, but make sure you get stamped if you are going out and want to return the same day.

Eat Street

Carseldine Markets

The Carseldine Farmers and Artisan Markets are a great way to spend your Saturday morning checking out local produce, arts and crafts, along with some good food and coffee. There is no entry fee, and there is lots of parking as well. There are a lot of other similar markets around Brisbane; a quick Google search should help you find the ones nearest to you.

Carseldine Markets

Always make sure to check out the place details and general tips before heading out, especially if you are hiking. Carry enough water, sunscreen, insect repellent, etc. I take along the Camelbak Octane XCT on such trips, which holds enough water for the three of us. Check out the hiking checklist for more detailed instructions.

Enjoy your weekends and sound off in the comments what other places you recommend checking out in and around Brisbane.

Over the last week, I have been reading the book Digital Minimalism by Cal Newport. The central idea of the book is being aware of the various technologies affecting our lives and making a conscious effort to choose only those that are required and add value to your life.

Minimalism is the art of knowing how much is just enough. Digital Minimalism applies this idea to our personal technology. It’s the key to living a focused life in an increasingly noisy world. - Cal Newport

State of My Online Life

Before starting the book, I have to admit that my online life was not that ordered or thought through. However, I was aware of social media applications taking up a significant part of my time, with me casually browsing without getting any value in return. I had intentionally stopped using Facebook for a couple of months, and now I am completely off it. Instagram soon followed, except for some occasional posting. I had uninstalled both apps from my phone, as the phone was my main access point to these sites. I had also intentionally turned off all notifications on the phone for a long time and found it really helpful.

However, what I was not aware of was that with these two applications gone, I soon started relying on other apps to fill their place. I am into running and found myself spending more time on Strava. For the social part, I got more into WhatsApp, YouTube, LinkedIn, and Twitter. When nothing else was there, I was hanging on to the email applications, pulling to refresh to see if anything interesting had come in (as if I was expecting a million-dollar email). Only after starting the book did I realize that these apps had taken over from the ones that I had given up.

Digital Declutter

The title of the book, ‘Digital Minimalism’, immediately caught my attention when I first saw it, and I was keen to read it. Primarily I wanted to reduce my phone usage, as most of my time was going there. Going through the digital declutter phase, I took note of all the technologies and applications that are on my phone. The Screen Time feature on iPhone (you can use Digital Wellbeing if you are on Android, or install the RescueTime application) made me more aware of the time that I had been spending on the phone and the apps that took most of it.

Screen Time Report on iPhone

I realized that a majority of my time was spent on WhatsApp, especially on group chats, scrolling through all the forwarded messages/videos in them and always checking back for more content. Even though I have a Kindle, I was reading more on the Kindle app on the phone. Most of the time when reading I would get distracted by something else and wander off to a different app. Even though I had notifications turned off, features like Badges took their place, which started pulling me back into the apps.

After noting down all the apps and analyzing them, I started decluttering my phone.

  • Exited all Whatsapp Group Chats
  • Removed the below apps
    • Emails (Gmail and Outlook)
    • Slack
    • Strava
    • Yammer
    • LinkedIn
    • Kindle
  • Disabled Badges notification
  • Disabled [Raise to Wake](https://support.apple.com/en-au/HT208081): This is one of the features that lure you into looking at the phone even if you did not intend to.
  • Microsoft Teams: Initially I had removed Teams, but realized it was the only way to communicate with my Readify team members quickly. So I decided to install it back.

By deleting the Kindle app, I am forcing myself not to use the phone for reading books. I am keen to try the idea of reading physical books and taking notes while reading, an idea that struck me while reading the Bullet Journal blog (which is also where I came across the book ‘Digital Minimalism’ for the first time). I was in India recently and took advantage of books being available at a much lower price, gifting myself some self-help books and a few others.

Books

Interestingly, I also came across Bullet Journaling at the same time, which aligns with Digital Minimalism as it forces you to put your ideas, thoughts, and to-dos down on paper as opposed to in a digital system. I have started trying this out alongside and find it helpful; more on it in a different blog post.

Bullet Journal

Digital Minimalism Is An Ongoing Process

Digital Minimalism is not a one-time activity but something to perform on an ongoing basis, and any time you think of adding a new technology into your life. It has been just over a week since I started decluttering my online life, and I am already seeing benefits. I pick up my phone less often and have less of a mental load to keep track of.

Phone Pickups after Decluttering

I plan to do the same decluttering process with my laptop once I get into the flow of the process. Decluttering is a great way to bring more focus into your life and gives you a lot more time than you previously had. How decluttered is your online life?

Tip of the Week: Squoosh - Make Images Smaller

Compress the images that you share online.

A while back I wrote about PNGGauntlet, a Windows application that reduces the size of PNG images. If you are looking for something that can handle any image format and is available through the browser, then check out Squoosh.

Squoosh is an image compression web app that allows you to dive into the advanced options provided by various image compressors.

Squoosh offers multiple compression options and defaults to MozJPEG. It also gives you some advanced settings that you can play around with to find what best suits your needs. The default compression settings alone provide a huge benefit (a 69% reduction in size, as in the image above). I use this primarily for the images that I share on this blog.

Squoosh your images!

Windows Service Using Topshelf, Quartz and Autofac

Walkthrough of setting up a recurrent job scheduler.

Whenever there is a need for automated jobs that have to run on-premises for a client, my default choice has been a Windows Service along with Quartz. Last year I blogged about one such instance. However, I did not go into the detail of setting up the project and the associated dependencies to run the service.

In this post, I will walk through how I went about setting up this recurrent job scheduler, to make it easy for me or anyone else who runs into a similar situation.

Topshelf

Topshelf makes the creation of Windows services easy by giving you the ability to run the service as a console application while developing it and to easily deploy it as a service. Setting up Topshelf is straightforward - all you need is a console application (targeting the .NET Framework) with a reference to the Topshelf NuGet package. To set up the service, you modify Program.cs with some setup code that creates the Windows service and sets some service metadata, similar to the sketch below.
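
As a rough idea, here is a minimal sketch of what that Program.cs setup might look like; the service name, display name and description are placeholders, and the parameterless construction and Start/Stop calls are simplified for illustration (the later sections show the SchedulerService constructed through Autofac instead).

using Topshelf;

public class Program
{
    public static void Main()
    {
        HostFactory.Run(x =>
        {
            // Run the service under the local system account
            x.RunAsLocalSystem();

            // Metadata shown in the Windows Services console
            x.SetServiceName("MyRecurringJobService");
            x.SetDisplayName("My Recurring Job Service");
            x.SetDescription("Runs recurring sync jobs on a schedule.");

            x.Service<SchedulerService>(s =>
            {
                s.ConstructUsing(name => new SchedulerService());
                s.WhenStarted(service => service.Start());
                s.WhenStopped(service => service.Stop());
            });
        });
    }
}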

Autofac

With Topshelf set up, we have a running Windows service application. To add dependency injection so that you do not have to wire up all your dependencies manually, you can use Autofac. The Topshelf.Autofac library helps integrate Topshelf with the Autofac DI container: you pass the container instance to the UseAutofacContainer extension method on HostConfigurator.

var container = Bootstrapper.BuildContainer();

var rc = HostFactory.Run(x =>
{
    x.UseAutofacContainer(container);
    
    x.Service<SchedulerService>(s =>
    {
        s.ConstructUsingAutofacContainer();
        s.WhenStarted(tc => tc.Start(config));
        s.WhenStopped(tc => tc.Stop());
    });

});

Quartz.Net

The SchedulerService will now be instantiated using the Autofac container, which makes it easy to inject dependencies into it. We need to be able to schedule jobs within the SchedulerService, hence we inject an IScheduler from Quartz.NET. Add a reference to the Quartz NuGet package, and you are all set to run jobs on a schedule. To integrate Quartz with Autofac so that job dependencies can also be injected via the container, we use the Autofac.Extras.Quartz NuGet package.

Wiring it Up

Below is a sample setup of the Autofac container that registers the jobs (MySyncJob) in the assembly and adds a scheduler instance to the container (using the QuartzAutofacFactoryModule). The IDbConnection is registered to match the lifetime scope of a Quartz job so that each job execution gets a separate connection instance.

public static class Bootstrapper
{
    public static IContainer BuildContainer()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<SchedulerService>();

        var schedulerConfig = new NameValueCollection
        {
            { "quartz.scheduler.instanceName", "MyScheduler" },
            { "quartz.jobStore.type", "Quartz.Simpl.RAMJobStore, Quartz" },
            { "quartz.threadPool.threadCount", "3" }
        };

        builder.RegisterModule(new QuartzAutofacFactoryModule
        {
            ConfigurationProvider = c => schedulerConfig
        });

        builder.RegisterModule(new QuartzAutofacJobsModule(typeof(MySyncJob).Assembly));

        var connectionString = ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;
        builder
           .RegisterType<SqlConnection>()
           .WithParameter("connectionString", connectionString)
           .As<IDbConnection>()
           .InstancePerMatchingLifetimeScope(QuartzAutofacFactoryModule.LifetimeScopeName);
       
        // Other registrations

        var container = builder.Build();
        return container;
    }
}

The SchedulerService class is used to start the scheduled jobs when the service starts up and shut them down when the service stops. As you can see, the IScheduler instance is constructor injected using Autofac. On start, the jobs are added to the scheduler (I am using a cron schedule in the example below).

public class SchedulerService
{
    private readonly IScheduler _scheduler;

    public SchedulerService(IScheduler scheduler)
    {
        _scheduler = scheduler;
    }

    public void Start(ScheduleConfig config)
    {
        ScheduleJobs(config);
        _scheduler.Start().ConfigureAwait(false).GetAwaiter().GetResult();
    }

    private void ScheduleJobs(ScheduleConfig config)
    {
        ScheduleJobWithCronSchedule<MySyncJob>(config.MySyncJobSchedule);
        ScheduleJobWithCronSchedule<MyOtherSyncJob>(config.MyOtherSyncJobSchedule);
    }

    private void ScheduleJobWithCronSchedule<T>(string cronShedule) where T : IJob
    {
        var jobName = typeof(T).Name;
        var job = JobBuilder
            .Create<T>()
            .WithIdentity(jobName, $"{jobName}-Group")
            .Build();

        var cronTrigger = TriggerBuilder
            .Create()
            .WithIdentity($"{jobName}-Trigger")
            .StartNow()
            .WithCronSchedule(cronShedule)
            .ForJob(job)
            .Build();

        // Pass the job detail along with the trigger so the job gets registered with the scheduler
        _scheduler.ScheduleJob(job, cronTrigger).ConfigureAwait(false).GetAwaiter().GetResult();
    }

    public void Stop()
    {
        _scheduler.Shutdown().ConfigureAwait(false).GetAwaiter().GetResult();
    }
}

The sync jobs have their own dependencies, which are again injected using the Autofac container. Adding new jobs is easy; all we need to make sure is that each job gets set up with the appropriate schedule and that its dependencies are registered in the container. A minimal sketch of such a job is shown below.
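
For completeness, here is a minimal sketch of what a job like MySyncJob might look like, assuming Quartz.NET 3.x (where IJob.Execute returns a Task) and the per-lifetime-scope IDbConnection registered above; the job body is just a placeholder.

using System.Data;
using System.Threading.Tasks;
using Quartz;

public class MySyncJob : IJob
{
    private readonly IDbConnection _connection;

    // Autofac.Extras.Quartz resolves constructor dependencies from the
    // container within the job's lifetime scope
    public MySyncJob(IDbConnection connection)
    {
        _connection = connection;
    }

    public Task Execute(IJobExecutionContext context)
    {
        // The actual sync work using _connection goes here
        return Task.CompletedTask;
    }
}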

Hope this helps you with setting up recurring scheduler jobs for on-prem scenarios.

Migrating Octopress To Hugo

Migrated my blog again - Here's how I went about doing it.

I have been on the Octopress blogging platform for around 5 years and was fairly happy with it. I had optimized the Octopress workflow for new posts, set it up for continuous delivery and also enabled scheduling posts in the future.

I have been wanting to migrate off Octopress for a while (reasons below) but had been putting it off since I did not want to go through another migration pain. Now that it is all done, the migration was not as hard as I thought. In this post I walk through the reasons for migrating away from Octopress, the actual migration steps involved, and tweaking the default Hugo settings/theme and workflow to get what I wanted.

Reasons To Migrate

  • No Longer Maintained: Octopress is no longer maintained, and it’s hard to keep up with all the dependent library updates and Ruby version changes. I had builds breaking randomly because of dependent package updates, and it was not something I liked dealing with.

  • Terribly Slow: To build my full site, Octopress takes around 2 minutes. Since I have modified my workflow to build only the draft posts locally, I usually don’t have to wait that long, but it’s still slow.

These two reasons were pressing enough to migrate off Octopress. Hugo was the natural choice for its speed and community, and it is the next highest rated after Jekyll (/Octopress). I also chose to move away from Azure hosting and use Netlify to host this blog.

Why Netlify? With Octopress, I had my build pipeline push the generated site contents back into GitHub (to a separate branch) and then had Azure deploy that branch automatically using a GitHub trigger. If I were to remain on Azure, I would have to do almost the same. Netlify comes with a Hugo template. The template is automatically detected when pointed to your repository and sets up all that is required to deploy the generated static content. All I had to do was set the correct build environment variable for HUGO_VERSION.

Netlify can host your Hugo site with CDN, continuous deployment, 1-click HTTPS, an admin GUI, and its own CLI.

I moved this site over to HTTPS a while back and had been using Cloudflare’s shared SSL. Moving over to a free Let’s Encrypt certificate would have required additional setup on Azure. Netlify takes out all this complexity and handles it for you in the background. Once you set up a custom domain, it is provisioned with a Let’s Encrypt certificate.

Migration

The actual migration of the content was mostly related to moving all the files and fixing up some code blocks.

  • Move Files: Moving files was easy, as both platforms support Markdown. Everything from your source folder maps into the content folder in Hugo. Pages that were in folders in Octopress are now Markdown files with the appropriate name in Hugo. The actual posts in Markdown had the date prepended to their file names. Using a PowerShell script, I stripped off the first 11 characters (YYYY-MM-DD-) and moved the files into the blog folder (since all my blog posts are under the /blog URL path). All my images live under the static folder.

Octopress to Hugo - Files

  • Fixing Code Blocks: Octopress supported adding a custom title to a code block, which is not available out of the box in Hugo. You can use a custom shortcode to set this up. However, I chose to remove the titles, as there were only a few code blocks that had them. Using regex search in VSCode, it’s easy to get rid of them all at once.

Configuring Hugo

With all the content ported over, all that was left was to select a theme and customize it, make sure all existing URLs work on the new site, configure search and a few other things.

After a bit of hunting around for themes, I decided to go with Minimo for its simplicity and support for most of Hugo’s configuration options. All my theme overrides are in the layouts folder. I have added support for showing a paged list of all my blog posts and made a few layout changes for list views and headers. I added Google Custom Search Engine support as a widget in the sidebar, and set up a custom 404 page to enable searching the site for content.

The site is now running on Hugo + Netlify. Building the whole site (not just the drafts) takes around 3-4 seconds (cold build), and every file change while the build watcher is running takes around 200 milliseconds. Hugo is blazing fast. If you face any issues or have any feedback, kindly drop a comment or send me a tweet.

2018: What Went Well, What Didn't and Goals

A short recap of the year that is gone by and looking forward!

Posts per month - 2016

Another year has gone by so fast, and it is again time to do a year review.

TLDR;

2018 saw most of my time go into running, cycling and learning to swim. Regular exercise and healthy eating helped maintain my weight. Lots of travel added to the excitement. Blogging, reading, photography and open source took the back seat and did not go as planned. Looking forward to 2019!

What went well

Running, Cycling and Learning to Swim

As planned last year, I did two half marathon events - the Brisbane Great South Run and the Springfield Half Marathon. I also managed a sub-50 10k run after many tries. Even though running a marathon was one of the goals, it did not happen. Cycling was on and off, and towards the year’s end I started commuting to work (around 10k one way). However, it lasted only around 2 months as I changed clients, and for the new one I had to commute to the Gold Coast (around 70km, once a week). With swimming, I still struggle to do more than two laps at a stretch. The goal was to swim 1km by the end of 2018, but it looks like that’s a long way to go. I was often lazy and not motivated enough to go out to the pool to practice.

I can spend all this time on these activities because of the full support from my wife Parvathy.

Year in Sport

Travel

We covered a couple of major tourist destinations around Australia - the Great Barrier Reef (Cairns), the Gold Coast, Fraser Island and Tasmania being the top highlights. We also did a lot of hikes and short trips around Brisbane, including Tin Can Bay and Ballina. Snorkeling in the Great Barrier Reef, the penguin tour in Tasmania, whale watching, the beach highway drive and 7-seater beach flight on Fraser Island, dolphin feeding at Tin Can Bay, and the croc attack show at Hartley’s Crocodile Park were some new experiences in life, and we enjoyed them a lot.

Community

I did two public talks, one at the .NET User Group on Azure Key Vault and one at Readify Back2Base on Ok I Have Got HTTPS! What Next?. Two is still a small number and is something I want to improve on in the coming years.

Talk at .Net User Group

I created a YouTube pip video on Key Vault Connected Service and wanted to do a couple more such videos, but they did not get prioritized over other things. The significant open source contribution was to the Key Vault configuration builder that is part of the ASP.NET Framework.

What didn’t go well

  • Blogging: Blogging took a back seat this year, even though I had set out to do 4 posts a month. I did a total of 29 posts, which is roughly over 2 posts a month. Some months I missed out completely.

Posts per month in the year 2018

  • Reading: The goal was 20 books this year; however, I ended up reading only 6. 80/20 Running helped me a lot with my running.

  • Open Source: I wanted to get actively involved with one open source project, but the only thing that happened was some minor contributions.

  • Learning: Most of the learning was at work, and I did not actively take up learning anything new. I did start with Category Theory and F# again, but it did not stick for long.

Goals for 2019

  • Reading: Read 10 books

  • Blogging: 2 posts a month and stay consistent across all months

  • Tri Sports: Complete a marathon, a couple of half marathon events, at least one cycling event and a 500m swim

  • Learning: Become more proficient with JavaScript. Build Azure Key Vault Explorer while learning.

I have been on and off with my planning and routines. I am looking to get back to this, organize my day-to-day activities a bit more and be more accountable for the goals that I have set.

Wishing you all a Happy and Prosperous New Year!

Query Object Pattern and Entity Framework - Making Readable Queries

Using a Query Object to contain large query criteria and iterating over the query to make it more readable.

Search is a common requirement in most of the applications that we build today. Searching for data often involves multiple fields, data types, and data from multiple tables (especially when using a relational database). I was recently building a search page that involved searching for orders - users needed the ability to search by different criteria such as the employee who created the order, orders for a customer, orders between particular dates, order status, and the delivery address. All criteria are optional and allow the user to narrow down the search with additional parameters. We were building an API endpoint to query this data based on the parameters using EF Core backed by Azure SQL.

In this post, we go through the code iterations that I made to improve the readability of the query and keep it contained in a single place. The intention is to create a Query Object-like structure that contains all the query logic and keeps it centralized and readable.

A Query Object is an interpreter [Gang of Four], that is, a structure of objects that can form itself into a SQL query. You can create this query by referring to classes and fields rather than tables and columns. In this way, those who write the queries can do so independently of the database schema, and changes to the schema can be localized in a single place.

// Query Object capturing the Search Criteria
public class OrderSummaryQuery
{
    public int? CustomerId { get; set; }
    public DateRange DateRange { get; set; }
    public string Employee { get; set; }
    public string Address { get; set;}
    public OrderStatus OrderStatus { get; set; }
}

I have removed the final projection in all the queries below to keep the code to a minimum. We will go through the iterations to make the code more readable while keeping the generated SQL query as efficient as possible.

Iteration 1 - Crude Form

Let’s start with the crudest form of the query, stating all the conditions inline. Since all properties are nullable, we check whether a value exists before using it in the query.

(from order in _context.Order
join od in _context.OrderDelivery on order.Id equals od.OrderId
join customer in _context.Customer on order.CustomerId equals customer.Id
where order.Status == OrderStatus.Quote &&
      order.Active == true &&
      (query.Employee == null || 
      (order.CreatedBy == query.Employee || customer.Employee == query.Employee)) &&
      (!query.CustomerId.HasValue ||
      customer.Id == query.CustomerId.Value) &&
      (query.DateRange == null || 
      order.Created >= query.DateRange.StartDate && order.Created <= query.DateRange.EndDate))

Iteration 2 - Separating into Multiple Lines

With all those explicit AND (&&) clauses, the query is hard to understand and maintain. Splitting them into multiple where clauses makes it cleaner and keeps each search criterion independent. The SQL query that gets generated remains the same in this case.

The aesthetics of code are as important as the code you write. Alignment is an important part that contributes to the overall aesthetics of code.

from order in _context.Order
join od in _context.OrderDelivery on order.Id equals od.OrderId
join customer in _context.Customer on order.CustomerId equals customer.Id
where order.Status == orderStatus && order.Active == true
where query.Employee == null ||
      order.CreatedBy == query.Employee || customer.Employee == query.Employee
where !query.CustomerId.HasValue || customer.Id == query.CustomerId.Value
where query.DateRange == null ||
      (order.Created >= query.DateRange.StartDate && order.Created <= query.DateRange.EndDate)

Iteration 3 - Refactor to Expressions

Now that each criterion is independently visible, let’s make each of the where clauses more readable. Refactoring them into ordinary C# methods makes the generated SQL inefficient, as EF cannot translate such method calls into SQL. Conditions inside a standard C# method get evaluated on the client side, after retrieving all the data from the server. Depending on the size of your data, this is something you need to be aware of.

However, if you use Expressions, those get translated and evaluated on the server. Since all of the conditions in our where clauses can be represented as Expressions, let’s move them to the query object class as properties returning Expressions. Since we need data from multiple tables, the intermediate projection OrderSummaryQueryResult helps to work with data from those tables. All our expressions take the OrderSummaryQueryResult projection and apply the appropriate conditions to it.

public class OrderSummaryQuery
{
    public Expression<Func<OrderSummaryQueryResult, bool>> BelongsToUser
    {
        get
        {
            return (a) => Employee == null ||
                      a.Order.CreatedBy == Employee || a.Customer.Employee == Employee;
        }
    }

    public Expression<Func<OrderSummaryQueryResult, bool>> IsActiveOrder...
    public Expression<Func<OrderSummaryQueryResult, bool>> ForCustomer...
    public Expression<Func<OrderSummaryQueryResult, bool>> InDateRange...
}
(from order in _context.Order
 join od in _context.OrderDelivery on order.Id equals od.OrderId
 join customer in _context.Customer on order.CustomerId equals customer.Id
 select new OrderSummaryQueryResult() 
    { Customer = customer, Order =    order, OrderDelivery = od })
.Where(query.IsActiveOrder)
.Where(query.BelongsToUser)
.Where(query.ForCustomer)
.Where(query.InDateRange)

-- Generated SQL when order status and employee name is set
SELECT [customer].[Name] AS [Customer], [order].[OrderNumber] AS [Number],
       [od].[Address], [order].[Created] AS [CreatedDate]
FROM [Order] AS [order]
INNER JOIN [OrderDelivery] AS [od] ON [order].[Id] = [od].[OrderId]
INNER JOIN [Customer] AS [customer] ON [order].[CustomerId] = [customer].[Id]
WHERE (([order].[Active] = 1) AND ([order].[Status] = @__OrderStatus_0)) AND 
      (([order].[CreatedBy] = @__employee_1) OR ([customer].[Employee] = @__employee_2))

If you use constructor initialization for the intermediate projection, OrderSummaryQueryResult, the where clauses get executed on the client side. So use the object initializer syntax to create the intermediate projection.
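
For reference, the intermediate projection OrderSummaryQueryResult is just a holder for the entities involved; a minimal sketch, with the shape assumed from the select clause above:

public class OrderSummaryQueryResult
{
    public Order Order { get; set; }
    public OrderDelivery OrderDelivery { get; set; }
    public Customer Customer { get; set; }
}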

Iteration 4 - Refactoring to Extension method

After the last iteration, we have a query that is easy to read and understand. We also have all the queries consolidated within the query object, so it acts as the one place holding all the query logic. However, something still did not feel right, so I had a quick chat with my friend Bappi, and we refined it further. The above query has too many Where calls, one repeated for each of the filters. To encapsulate this further, I moved all the filter expressions into an Enumerable property and wrote an extension method, ApplyAllFilters, to apply them all.

// Expose one property for all the filters 
public class OrderSummaryQuery
{
    public IEnumerable<Expression<Func<OrderSummaryQueryResult, bool>>> AllFilters
    {
        get
        {
            yield return IsActiveOrderStatus;
            yield return BelongsToUser;
            yield return BelongsToCustomer;
            yield return FallsInDateRange;
        }
    }

    private Expression<Func<OrderSummaryQueryResult, bool>> BelongsToUser...
    private Expression<Func<OrderSummaryQueryResult, bool>> IsActiveOrder...
    private Expression<Func<OrderSummaryQueryResult, bool>> ForCustomer...
    private Expression<Func<OrderSummaryQueryResult, bool>> InDateRange...
}

... 

// Extension Method on IQueryable
{
    public static IQueryable<T> ApplyAllFilters<T>(
        this IQueryable<T> queryable,
        IEnumerable<Expression<Func<T, bool>>> filters)
    {
        foreach (var filter in filters)
            queryable = queryable.Where(filter);

        return queryable;
    }
}
{
    (from order in _context.Order
    join od in orderDeliveries on order.Id equals od.OrderId
    join customer in _context.Customer on order.CustomerId equals customer.Id
    select new OrderSummaryQueryResult() { Customer = customer, Order = order, OrderDelivery = od })
    .ApplyAllFilters(query.AllFilters)
    
    ...
}

The search query is much more readable than what we started with in Iteration 1. One thing you should always be careful about with EF is making sure that the generated SQL is optimized and that you know what gets executed on the server versus the client. Use a SQL profiler or configure logging to see the generated SQL. You can also configure EF to throw an exception (in your development environment) when a query falls back to client evaluation, as in the sketch below.
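
Here is a minimal sketch of that last option, assuming EF Core 2.x and SQL Server; the context name and connection string are placeholders:

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Diagnostics;

public class OrdersContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder
            .UseSqlServer("<connection string>")
            // Throw (ideally only in development) instead of silently
            // evaluating part of the query on the client
            .ConfigureWarnings(warnings =>
                warnings.Throw(RelationalEventId.QueryClientEvaluationWarning));
    }
}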

Hope this helps to write cleaner and readable queries. Sound off in the comments if you have thoughts on refining this further or of any other patterns that you use.

Exclude Certain Scripts From Transaction When Using DbUp

Certain commands cannot run under a transaction. See how you can exclude them while still keeping the rest of your scripts under a transaction.

Recently I wrote about Setting Up DbUp in Azure Pipelines at one of my clients. We had all our scripts run under the Transaction Per Script mode, and it was all working fine until we had to deploy some SQL scripts that cannot be run under a transaction. So now I have a bunch of SQL script files that can be run under a transaction and some (like the ones below - Full-Text Search) that cannot. By default, if you run these using DbUp under a transaction, you get the error message CREATE FULLTEXT CATALOG statement cannot be used inside a user transaction, and this is an existing issue.

CREATE FULLTEXT CATALOG MyCatalog
GO

CREATE FULLTEXT INDEX 
ON  [dbo].[Products] ([Description])
KEY INDEX [PK_Products] ON MyCatalog
WITH CHANGE_TRACKING AUTO
GO

One option would be to turn off transactions altogether using builder.WithoutTransaction() (the default transaction setting), and everything would work as usual. But if you want each of your scripts to run under a transaction where possible, you can choose either of the options below.

Using Pre-Processors to Modify Script Before Execution

Script pre-processors are an extensibility hook into DbUp and allow you to modify a script before it gets executed. So we can wrap each SQL script with a transaction before it gets executed. In this case, you configure your builder to run WithoutTransaction, modify each script file before execution, and explicitly wrap it with a transaction if required. Writing a custom pre-processor is quickly done by implementing the IScriptPreprocessor interface, which hands you the contents of the script file to modify. Here, all I do is check whether the text contains ‘CREATE FULLTEXT’ and wrap it with a transaction if it does not. You could use file-name conventions or any other rule of your choice to perform the check and conditionally wrap with a transaction. A sketch of wiring the pre-processor into the upgrader follows the class below.

public class ConditionallyApplyTransactionPreprocessor : IScriptPreprocessor
{
    public string Process(string contents)
    {
        if (!contents.Contains("CREATE FULLTEXT", StringComparison.InvariantCultureIgnoreCase))
        {
            var modified =
                $@"
BEGIN TRANSACTION   
BEGIN TRY
           {contents}
    COMMIT;
END TRY
BEGIN CATCH
    ROLLBACK;
    THROW;
END CATCH";

            return modified;
        }
        else
            return contents;
    }
}
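
A minimal sketch of plugging the pre-processor into the upgrader, assuming the scripts are embedded in the current assembly; the connection string and script source are placeholders:

var upgrader = DeployChanges.To
    .SqlDatabase(connectionString)
    .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
    // The pre-processor now decides per script whether to wrap it in a transaction
    .WithoutTransaction()
    .WithPreprocessor(new ConditionallyApplyTransactionPreprocessor())
    .LogToConsole()
    .Build();

var result = upgrader.PerformUpgrade();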

Using Multiple UpgradeEngine to Deploy Scripts

If you are not particularly keen on tweaking the pre-processing step and want to use the default implementations of DbUp while still keeping transactions for your scripts where possible, you can use multiple upgraders to perform the job for you. Iterate over all your script files and partition them into batches of files that can be run under a transaction and those that can’t. As shown in the image below, you end up with multiple batches alternating between transactional and non-transactional sets of scripts. When performing the upgrade for a batch, set WithTransactionPerScript on the builder conditionally. If any of the batches fail, you can terminate the database upgrade.

Script file batches

{
    Func<string,bool> canRunUnderTransaction = (fileName) => !fileName.Contains("FullText");
    Func<List<string>, string, bool> belongsToCurrentBatch = (batch, file) =>
		batch != null &&
        canRunUnderTransaction(batch.First()) == canRunUnderTransaction(file);
    
    var batches = allScriptFiles.Aggregate
        (new List<List<string>>(), (current, next) =>
            {
                if (belongsToCurrentBatch(current.LastOrDefault(),next))
                    current.Last().Add(next);
                else
                    current.Add(new List<string>() { next });

                return current;
            });

    foreach (var batch in batches)
    {
        // Every file in a batch shares the same transaction capability
        var includeTransaction = batch.All(canRunUnderTransaction);

        var result = PerformUpgrade(batch.ToSqlScriptArray(), includeTransaction);

        if (!result.Successful)
        {
            Console.ForegroundColor = ConsoleColor.Red;
            Console.WriteLine(result.Error);
            Console.ResetColor();
            return -1;
        }
    }

    Console.ForegroundColor = ConsoleColor.Green;
    Console.WriteLine("Success!");
    Console.ResetColor();
    return 0;
}

private static DatabaseUpgradeResult PerformUpgrade(
    SqlScript[] scripts,
    bool includeTransaction)
{
    var builder = DeployChanges.To
        .SqlDatabase(connectionString)
        .WithScripts(scripts)
        .LogToConsole();

    if (includeTransaction)
        builder = builder.WithTransactionPerScript();

      var upgrader = builder.Build();

    var result = upgrader.PerformUpgrade();

    return result;
}
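
The ToSqlScriptArray helper used above is not shown in the post; here is a minimal sketch of what such a helper might look like (the implementation is an assumption, reading each file into a DbUp SqlScript):

using System.Collections.Generic;
using System.Linq;
using DbUp.Engine;

public static class ScriptFileExtensions
{
    // Wrap each script file path as a DbUp SqlScript, keeping file order
    public static SqlScript[] ToSqlScriptArray(this IEnumerable<string> scriptFiles)
    {
        return scriptFiles
            .Select(file => SqlScript.FromFile(file))
            .ToArray();
    }
}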

Keeping all your scripts in a single place and automating them through the build-release pipeline is something you should strive for. Hope this helps you continue using DbUp even when your scripts are a mix of transactional and non-transactional.

.Net Core Web App and Azure AD Groups Role based access

Use Azure AD groups to enable/disable functionality for your users based on their Roles.

Providing capabilities based on the role of the user using the system is a common requirement. When using Azure Active Directory (AD), the Groups feature allows organizing the users of your system into different roles. In the applications that we build, the group information can be used to enable/disable functionality. For example, if your application has the functionality to add new users, you might want to restrict this to users belonging to the administrator role.

Adding new groups can be done using the Azure portal. Select the Group type ‘Security’, as it is intended to provide permissions based on roles.

Azure AD Add Group

For the groups to be returned as part of the claims, the groupMembershipClaims property in the application manifest needs to be updated. Setting it to SecurityGroup will return all security groups of the user.

{
    "groupMembershipClaims": "SecurityGroup"
}

Each group created is assigned an ObjectId, which is what gets returned as part of the claims. You can either add the group ids to your application’s config file or use the Microsoft Graph API to query the list of groups at runtime. Here I have chosen to keep them in the config file.

"AdGroups": [
  {
    "GroupName": "Admin",
    "GroupId": "119f6fb5-a325-47f9-9889-ae6979e9e120"
  },
  {
    "GroupName": "Employee",
    "GroupId": "02618532-b2c0-4e58-a32e-e715ddf07f63"
  }
]

Now that we have all the groups and the associated configuration set up, we can wire up the .NET Core web application to start using the groups from the claims to enable/disable features. Using the policy-based authorization capabilities of .NET Core, we can wire up policies for all the groups we have.

Role-based authorization and claims-based authorization use a requirement, a requirement handler, and a pre-configured policy. These building blocks support the expression of authorization evaluations in code. The result is a richer, reusable, testable authorization structure.

We have an IsMemberOfGroupRequirement class to represent the requirement for all the groups, and an IsMemberOfGroupHandler that implements how to validate a group requirement. The handler reads the current user’s claims and checks whether they contain the ObjectId associated with the group as a claim. If a match is found, the requirement check is marked as a success. Since we want the request to continue matching against any other group requirements, the requirement is not failed explicitly.

public class IsMemberOfGroupRequirement : IAuthorizationRequirement
{
    public readonly string GroupId ;
    public readonly string GroupName ;

    public IsMemberOfGroupRequirement(string groupName, string groupId)
    {
        GroupName = groupName;
        GroupId = groupId;
    }
}

public class IsMemberOfGroupHandler : AuthorizationHandler<IsMemberOfGroupRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, IsMemberOfGroupRequirement requirement)
    {
        var groupClaim = context.User.Claims
             .FirstOrDefault(claim => claim.Type == "groups" &&
                 claim.Value.Equals(requirement.GroupId, StringComparison.InvariantCultureIgnoreCase));

        if (groupClaim != null)
            context.Succeed(requirement);

        return Task.CompletedTask;
    }
}

Registering the policies for all the groups in the application’s configuration file, along with the handler, can be done as below. Looping through all the groups in the config, we create a policy for each with the associated GroupName. This allows us to use the GroupName as the policy name at the places where we want to restrict features to users belonging to that group.

services.AddAuthorization(options =>
{
    var adGroupConfig = new List<AdGroupConfig>();
    _configuration.Bind("AdGroups", adGroupConfig);

    foreach (var adGroup in adGroupConfig)
        options.AddPolicy(
            adGroup.GroupName, 
            policy =>
                policy.AddRequirements(new IsMemberOfGroupRequirement(adGroup.GroupName, adGroup.GroupId)));
});

services.AddSingleton<IAuthorizationHandler, IsMemberOfGroupHandler>();
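
The AdGroupConfig class bound above is a simple POCO; a minimal sketch, with property names assumed to match the configuration keys shown earlier:

public class AdGroupConfig
{
    // Matches an entry under the "AdGroups" configuration section
    public string GroupName { get; set; }
    public string GroupId { get; set; }
}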

Using a policy is now as simple as decorating your controllers with the Authorize attribute and providing the required policy name, as shown below.

[Authorize(Policy = "Admin")]
[ApiController]
public partial class AddUsersController : ControllerBase
{
    ....
}

Hope this helps you set up role-based functionality for your ASP.NET Core applications using Azure AD as the authentication/authorization provider.
