Query Object Pattern and Entity Framework - Making Readable Queries

    Using a Query Object to contain large query criteria, and iterating on it to make the query more readable.

    Search is a common requirement for most of the applications that we build today. Searching for data often involves multiple fields, data types, and data from multiple tables (especially when using a relational database). I was recently building a Search page which involved searching for Orders - users needed the ability to search by different criteria, such as the employee who created the order, orders for a customer, orders between particular dates, order status, and the delivery address. All criteria are optional and allow users to narrow down the search with additional parameters. We were building an API endpoint to query this data based on the parameters, using EF Core backed by Azure SQL.

    In this post, we go through the code iterations that I made to improve on the readability of the query and keep it contained in a single place. The intention is to create a Query Object like structure that contains all query logic and keep it centralized and readable.

    A Query Object is an interpreter [Gang of Four], that is, a structure of objects that can form itself into a SQL query. You can create this query by referring to classes and fields rather than tables and columns. In this way, those who write the queries can do so independently of the database schema, and changes to the schema can be localized in a single place.

    // Query Object capturing the Search Criteria
    public class OrderSummaryQuery
    {
        public int? CustomerId { get; set; }
        public DateRange DateRange { get; set; }
        public string Employee { get; set; }
        public string Address { get; set; }
        public OrderStatus OrderStatus { get; set; }
    }

    I have removed the final projection in all the queries below to keep the code to a minimum. We will go through several iterations to make the code more readable, while keeping the generated SQL query as efficient as possible.

    Iteration 1 - Crude Form

    Let’s start with the crudest form of the query, stating all possible combinations of the criteria. Since all properties are optional, we check whether a value exists before using it in the query.

    (from order in _context.Order
    join od in _context.OrderDelivery on order.Id equals od.OrderId
    join customer in _context.Customer on order.CustomerId equals customer.Id
    where order.Status == OrderStatus.Quote &&
          order.Active == true &&
          (query.Employee == null || 
          (order.CreatedBy == query.Employee || customer.Employee == query.Employee)) &&
          (!query.CustomerId.HasValue ||
          customer.Id == query.CustomerId.Value) &&
          (query.DateRange == null || 
          order.Created >= query.DateRange.StartDate && order.Created <= query.DateRange.EndDate))

    Iteration 2 - Separating into Multiple Lines

    With all those explicit AND (&&) clauses the query is hard to understand and keep up with. Splitting them into multiple where clauses makes it cleaner and keeps each search criterion independent. The SQL query that gets generated remains the same in this case.

    Aesthetics of code is as important as the code you write. Aligning is an important part that contributes to the overall aesthetics of code.

    from order in _context.Order
    join od in _context.OrderDelivery on order.Id equals od.OrderId
    join customer in _context.Customer on order.CustomerId equals customer.Id
    where order.Status == orderStatus && order.Active == true
    where query.Employee == null ||
          order.CreatedBy == query.Employee || customer.Employee == query.Employee
    where !query.CustomerId.HasValue || customer.Id == query.CustomerId.Value
    where query.DateRange == null ||
          (order.Created >= query.DateRange.StartDate && order.Created <= query.DateRange.EndDate)

    Iteration 3 - Refactor to Expressions

    Now that each criterion is independently visible, let’s make each of the where clauses more readable. Refactoring them into regular C# functions makes the generated SQL inefficient, as EF cannot translate C# functions into SQL. Such conditions in a standard C# function get evaluated on the client side, after retrieving all the data from the server. Depending on the size of your data, this is something you need to be aware of.
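    To make the pitfall concrete, here is a hypothetical sketch (not from the original code) of a filter written as a plain C# method; EF cannot look inside the method body, so the filter would run on the client after all rows are fetched:

```csharp
// Hypothetical anti-pattern sketch: a plain method is opaque to the EF query
// translator, forcing client-side evaluation of the filter.
public class OrderSummaryQuery
{
    public string Employee { get; set; }

    // NOT translatable to SQL - EF only sees an opaque method call.
    public bool BelongsToUser(OrderSummaryQueryResult a) =>
        Employee == null ||
        a.Order.CreatedBy == Employee || a.Customer.Employee == Employee;
}
```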

    However, if you use Expressions, those get translated to evaluate on the server. Since all of the conditions in our where clauses can be represented as Expressions, let’s move them to the Query Object class as properties returning Expressions. Since we need data from multiple tables, the intermediate projection OrderSummaryQueryResult helps to work with data from those tables. All our expressions take the OrderSummaryQueryResult projection and apply the appropriate conditions on it.

    public class OrderSummaryQuery
    {
        public Expression<Func<OrderSummaryQueryResult, bool>> BelongsToUser
        {
            get
            {
                return (a) => Employee == null ||
                          a.Order.CreatedBy == Employee || a.Customer.Employee == Employee;
            }
        }
        public Expression<Func<OrderSummaryQueryResult, bool>> IsActiveOrder...
        public Expression<Func<OrderSummaryQueryResult, bool>> ForCustomer...
        public Expression<Func<OrderSummaryQueryResult, bool>> InDateRange...
    }

    (from order in _context.Order
     join od in _context.OrderDelivery on order.Id equals od.OrderId
     join customer in _context.Customer on order.CustomerId equals customer.Id
     select new OrderSummaryQueryResult()
        { Customer = customer, Order = order, OrderDelivery = od })
    -- Generated SQL when order status and employee name is set
    SELECT [customer].[Name] AS [Customer], [order].[OrderNumber] AS [Number],
           [od].[Address], [order].[Created] AS [CreatedDate]
    FROM [Order] AS [order]
    INNER JOIN [OrderDelivery] AS [od] ON [order].[Id] = [od].[OrderId]
    INNER JOIN [Customer] AS [customer] ON [order].[CustomerId] = [customer].[Id]
    WHERE (([order].[Active] = 1) AND ([order].[Status] = @__OrderStatus_0)) AND 
          (([order].[CreatedBy] = @__employee_1) OR ([customer].[Employee] = @__employee_2))
    If you use constructor initialization for the intermediate projection, OrderSummaryQueryResult, the where clauses get executed on the client side. So use the object initializer syntax to create the intermediate projection.
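    To make this caveat concrete, here is a sketch of the projection class (its shape is assumed, as the post does not show it) along with the two ways of creating it:

```csharp
// Assumed shape of the intermediate projection (not shown in the post).
public class OrderSummaryQueryResult
{
    public Customer Customer { get; set; }
    public Order Order { get; set; }
    public OrderDelivery OrderDelivery { get; set; }
}

// select new OrderSummaryQueryResult(customer, order, od)
//   -> constructor call is opaque to EF Core 2.x; where clauses run on the client.
// select new OrderSummaryQueryResult() { Customer = customer, Order = order, OrderDelivery = od }
//   -> object initializer keeps the query translatable; where clauses run on the server.
```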

    Iteration 4 - Refactoring to Extension method

    After the last iteration, we have a query that is easy to read and understand. We also have all queries consolidated within the query object, which acts as the one place holding all the queries. However, something still did not feel right; I had a quick chat with my friend Bappi, and we refined it further. The above query has too many where clauses, one repeated for each of the filters. To encapsulate this further, I moved all the filter expressions into an Enumerable and wrote an extension method, ApplyAllFilters, to apply them all.

    // Expose one property for all the filters
    public class OrderSummaryQuery
    {
        public IEnumerable<Expression<Func<OrderSummaryQueryResult, bool>>> AllFilters
        {
            get
            {
                yield return IsActiveOrder;
                yield return BelongsToUser;
                yield return ForCustomer;
                yield return InDateRange;
            }
        }

        private Expression<Func<OrderSummaryQueryResult, bool>> BelongsToUser...
        private Expression<Func<OrderSummaryQueryResult, bool>> IsActiveOrder...
        private Expression<Func<OrderSummaryQueryResult, bool>> ForCustomer...
        private Expression<Func<OrderSummaryQueryResult, bool>> InDateRange...
    }

    // Extension Method on IQueryable
    public static IQueryable<T> ApplyAllFilters<T>(
        this IQueryable<T> queryable,
        IEnumerable<Expression<Func<T, bool>>> filters)
    {
        foreach (var filter in filters)
            queryable = queryable.Where(filter);
        return queryable;
    }

    (from order in _context.Order
     join od in _context.OrderDelivery on order.Id equals od.OrderId
     join customer in _context.Customer on order.CustomerId equals customer.Id
     select new OrderSummaryQueryResult() { Customer = customer, Order = order, OrderDelivery = od })
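    Putting it together, the final query would look something like the sketch below (reusing the names from the post; the filters are applied before the query is materialized):

```csharp
// Compose the query, apply every filter from the query object, then execute.
var results = await
    (from order in _context.Order
     join od in _context.OrderDelivery on order.Id equals od.OrderId
     join customer in _context.Customer on order.CustomerId equals customer.Id
     select new OrderSummaryQueryResult()
        { Customer = customer, Order = order, OrderDelivery = od })
    .ApplyAllFilters(query.AllFilters)
    .ToListAsync();
```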

    The search query is much more readable than what we started with in Iteration 1. One thing you should always be careful about with EF is making sure that the generated SQL is optimized and that you know what gets executed on the server versus the client. Use a SQL profiler or configure logging to see the generated SQL. You can also configure EF to throw an exception (in your development environment) whenever a query falls back to client evaluation.
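    In EF Core 2.x, client evaluation can be turned into an error at context configuration time; a minimal sketch (the context class and connection string names are illustrative):

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Diagnostics;

// Throw instead of warn when EF Core 2.x falls back to client evaluation.
services.AddDbContext<OrdersContext>(options =>
    options.UseSqlServer(connectionString)
           .ConfigureWarnings(warnings =>
               warnings.Throw(RelationalEventId.QueryClientEvaluationWarning)));
```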

    Hope this helps to write cleaner and readable queries. Sound off in the comments if you have thoughts on refining this further or of any other patterns that you use.

    Exclude Certain Scripts From Transaction When Using DbUp

    Certain commands cannot run under a transaction. See how you can exclude them while still keeping your rest of the scripts under transaction.

    Recently I wrote about Setting Up DbUp in Azure Pipelines at one of my clients. We had all our scripts run under the Transaction Per Script mode, and it was all working fine until we had to deploy some SQL scripts that cannot run under a transaction. So now I have a bunch of SQL script files that can run under a transaction and some (like the ones below, for Full-Text Search) that cannot. By default, if you run the latter using DbUp under a transaction, you get the error message CREATE FULLTEXT CATALOG statement cannot be used inside a user transaction, and this is a known issue.

    CREATE FULLTEXT CATALOG MyCatalog

    CREATE FULLTEXT INDEX ON [dbo].[Products] ([Description])
    KEY INDEX [PK_Products] ON MyCatalog

    One option would be to turn off transactions altogether using builder.WithoutTransaction() (the default transaction setting) and everything would work as usual. But in case you want each of your scripts to run under a transaction where possible, you can choose either of the options below.

    Using Pre-Processors to Modify Script Before Execution

    Script pre-processors are an extensibility hook into DbUp and allow you to modify a script before it gets executed, so we can wrap each SQL script with a transaction before it runs. In this case, you configure your builder to run WithoutTransaction, modify each script file before execution, and explicitly wrap it in a transaction if required. Writing a custom pre-processor is quickly done by implementing the IScriptPreprocessor interface, and you get the contents of the script file to modify. Here, all I do is check whether the text contains ‘CREATE FULLTEXT’ and wrap it in a transaction if it does not. You could use file-name conventions or any other rule of your choice to perform the check and conditionally wrap with a transaction.

    public class ConditionallyApplyTransactionPreprocessor : IScriptPreprocessor
    {
        public string Process(string contents)
        {
            if (!contents.Contains("CREATE FULLTEXT", StringComparison.InvariantCultureIgnoreCase))
            {
                // Explicitly wrap the script in a transaction
                var modified = $@"BEGIN TRANSACTION
{contents}
COMMIT TRANSACTION";
                return modified;
            }

            return contents;
        }
    }
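    Wiring the pre-processor into the DbUp builder might look like the sketch below; note the engine itself runs WithoutTransaction, since transactions are now added (or not) per script by the pre-processor:

```csharp
var upgrader = DeployChanges.To
    .SqlDatabase(connectionString)
    .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
    .WithPreprocessor(new ConditionallyApplyTransactionPreprocessor())
    .WithoutTransaction() // the pre-processor decides per script
    .LogToConsole()
    .Build();
```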

    Using Multiple UpgradeEngine to Deploy Scripts

    If you are not particularly keen on tweaking the pre-processing step and want to use the default implementations of DbUp while still keeping transactions for your scripts where possible, you can use multiple upgraders to perform the job for you. Iterate over all your script files and partition them into batches of files that need to run under a transaction and those that cannot. As shown in the image below, you end up with multiple batches alternating between transactional and non-transactional sets of scripts. When performing the upgrade over a batch, set WithTransactionPerScript on the builder conditionally. If any of the batches fail, you can terminate the database upgrade.

    Script file batches

        Func<string, bool> canRunUnderTransaction = (fileName) => !fileName.Contains("FullText");
        Func<List<string>, string, bool> belongsToCurrentBatch = (batch, file) =>
            batch != null &&
            canRunUnderTransaction(batch.First()) == canRunUnderTransaction(file);

        var batches = allScriptFiles.Aggregate(
            new List<List<string>>(), (current, next) =>
            {
                if (belongsToCurrentBatch(current.LastOrDefault(), next))
                    current.Last().Add(next);
                else
                    current.Add(new List<string>() { next });
                return current;
            });

        foreach (var batch in batches)
        {
            var includeTransaction = canRunUnderTransaction(batch.First());
            var result = PerformUpgrade(batch.ToSqlScriptArray(), includeTransaction);
            if (!result.Successful)
            {
                Console.ForegroundColor = ConsoleColor.Red;
                return -1;
            }
        }
        Console.ForegroundColor = ConsoleColor.Green;
        return 0;

    private static DatabaseUpgradeResult PerformUpgrade(
        SqlScript[] scripts,
        bool includeTransaction)
    {
        var builder = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScripts(scripts)
            .LogToConsole();
        if (includeTransaction)
            builder = builder.WithTransactionPerScript();
        var upgrader = builder.Build();
        var result = upgrader.PerformUpgrade();
        return result;
    }

    Keeping all your scripts in a single place and automating it through the build-release pipeline is something you need to strive for. Hope this helps you to continue using DbUp even if you want to execute scripts that are a mix of transactional and non-transactional.

    .Net Core Web App and Azure AD Security Groups Role based access

    Use Azure AD groups to enable/disable functionality for your users based on their Roles.

    Getting your application to provide capabilities based on the role of the user using the system is a common requirement. When using Azure Active Directory (AD), the Security Groups feature allows organizing users of your system into different roles. In the applications that we build, the group information can be used to enable/disable functionality. For example, if your application has functionality to add new users, you might want to restrict this to users belonging to the administrator role.

    Adding new groups can be done using the Azure portal. Select the Group Type as Security, as it is intended to provide permissions based on roles.

    Azure AD Add Group

    For the Groups to be returned as part of the claims, the groupMembershipClaims property in application manifest needs to be updated. Setting it to SecurityGroup will return all SecurityGroups of the user.

        "groupMembershipClaims": "SecurityGroup"

    Each group created is assigned an ObjectId, which is what gets returned as part of the claims. You can either add it to your application's config file or use the Microsoft Graph API to query the list of groups at runtime. Here I have chosen to keep it in the config file.

    "AdGroups": [
        {
            "GroupName": "Admin",
            "GroupId": "119f6fb5-a325-47f9-9889-ae6979e9e120"
        },
        {
            "GroupName": "Employee",
            "GroupId": "02618532-b2c0-4e58-a32e-e715ddf07f63"
        }
    ]

    Now that we have all the groups and associated configuration setup, we can wire up the .Net Core web application to start using the groups from the claims to enable/disable features. Using the Policy-based authorization capabilities of .Net core application we can wire up policies for all the groups we have.

    Role-based authorization and claims-based authorization use a requirement, a requirement handler, and a pre-configured policy. These building blocks support the expression of authorization evaluations in code. The result is a richer, reusable, testable authorization structure.

    We have an IsMemberOfGroupRequirement class to represent the requirement for all the groups, and an IsMemberOfGroupHandler that implements how to validate a group requirement. The handler reads the current user’s claims and checks whether they contain the ObjectId associated with the group as a claim. If a match is found, the requirement check is marked as a success. Since we want the request to continue matching any other group requirements, the requirement is not explicitly failed.

    public class IsMemberOfGroupRequirement : IAuthorizationRequirement
    {
        public readonly string GroupId;
        public readonly string GroupName;

        public IsMemberOfGroupRequirement(string groupName, string groupId)
        {
            GroupName = groupName;
            GroupId = groupId;
        }
    }

    public class IsMemberOfGroupHandler : AuthorizationHandler<IsMemberOfGroupRequirement>
    {
        protected override Task HandleRequirementAsync(
            AuthorizationHandlerContext context, IsMemberOfGroupRequirement requirement)
        {
            var groupClaim = context.User.Claims
                 .FirstOrDefault(claim => claim.Type == "groups" &&
                     claim.Value.Equals(requirement.GroupId, StringComparison.InvariantCultureIgnoreCase));

            if (groupClaim != null)
                context.Succeed(requirement);

            return Task.CompletedTask;
        }
    }

    Registering the policies for all the groups in the application’s configuration file and the handler can be done as below. Looping through all the groups in the config we create a policy for each with the associated GroupName. It allows us to use the GroupName as the policy name at places where we want to restrict features for users belonging to that group.

    services.AddAuthorization(options =>
    {
        var adGroupConfig = new List<AdGroupConfig>();
        _configuration.Bind("AdGroups", adGroupConfig);

        foreach (var adGroup in adGroupConfig)
            options.AddPolicy(
                adGroup.GroupName,
                policy =>
                    policy.AddRequirements(new IsMemberOfGroupRequirement(adGroup.GroupName, adGroup.GroupId)));
    });

    services.AddSingleton<IAuthorizationHandler, IsMemberOfGroupHandler>();

    Using the policy is now as simple as decorating your controllers with the Authorize attribute and providing the required Policy names on it as shown below.

    [Authorize(Policy = "Admin")]
    public partial class AddUsersController : ControllerBase

    Hope this helps you to setup Role-based functionality for your ASP.Net Core applications using Azure AD as authentication/authorization provider.

    Azure AD Custom Attributes and Optional Claims from an ASP.Net Application

    Adding and retrieving custom attributes from an Azure AD

    When using Azure Active Directory for managing your users, it is a common requirement to add additional attributes to your users, like SkypeId, EmployeeId, and similar. Even though this happens to be a common need, getting it done is not that straightforward. This post describes how you can get additional properties on User objects in Azure AD.

    Recently, when I had to do this at a client, we had users in Azure AD; the additional property, employeeCode, was available in an internal application which had the user's Azure email address mapped to it. We needed these codes synced across to Azure AD and made available as part of the claims for a website that uses Azure AD authentication.

    Adding Custom Attribute using Directory Schema Extensions

    An Azure AD user has a set of default properties, manageable through the Azure Portal. Any additional property gets added as an extension to the current user schema. To add a new property, we first need to register an extension, which can be done using the Graph Explorer website. You need to specify the appropriate directory name (e.g., contoso.onmicrosoft.com) and the applicationObjectId. The application object id is the Object Id of the AD application that the web application uses to authenticate with Azure AD.

    Azure AD supports a similar type of extension, known as directory schema extensions, on a few directory object resources. Although you have to use the Azure AD Graph API to create and manage the definitions of directory schema extensions, you can use the Microsoft Graph API to add, get, update and delete data in the properties of these extensions.

    POST https://graph.windows.net/contoso.onmicrosoft.com/applications/
        <applicationObjectId>/extensionProperties?api-version=1.5 HTTP/1.1

    {
        "name": "employeeCode<optionalEnvironmentName>",
        "dataType": "String",
        "targetObjects": [
            "User"
        ]
    }
    The response gives back the fully-qualified extension property name, which is used to write values to the property. Usually the name is of the format extension_<adApplicationIdWithoutDashes>_extensionPropertyName

    If you have multiple environments (like Dev, Test, UAT, Prod) all pointing to the same Active Directory, it is a good idea to append the environment name to the extension property. It avoids any bad data issues between environments as all these properties get written to the same User object. You can automate the above step using any scripting language of your choice if required.

    Setting Values for Custom Attributes

    Now that we have the extension property created on the AD application, we can set the property on the User object. If you want to set this manually, you can use the GraphExplorer website again to do this.

    PATCH https://graph.windows.net/contoso.onmicrosoft.com/users/
        <userPrincipalName>?api-version=1.5 HTTP/1.1

    {
        "extension_ab603c56068041afb2f6832e2a17e237_employeeCode<optionalEnvironmentName>": "EMP124"
    }

    In our case it was not a one-off update of the User object, so we wanted this automated. Employee codes were available from a database with the associated Azure AD email address, so we created a Windows service job to sync these codes to Azure AD. You can write to Azure AD schema extension properties using the Microsoft Graph API. Add a reference to the Microsoft Graph NuGet package, and you are all set to go. For the Graph API to authenticate, use a different Azure AD app (separate from the one that you registered the extension property on, which the web app uses to authenticate), because it needs additional permissions and it is a good idea to isolate those. Under Settings -> Required Permissions, add Microsoft Graph and provide the relevant permissions for it to write the user's profile/directory data.

    Azure AD Graph API Permissions

    private static async Task<GraphServiceClient> GetGraphApiClient()
    {
        var clientId = ConfigurationManager.AppSettings["AppId"];
        var secret = ConfigurationManager.AppSettings["Secret"];
        var domain = ConfigurationManager.AppSettings["Domain"];

        var credentials = new ClientCredential(clientId, secret);
        var authContext =
            new AuthenticationContext($"https://login.microsoftonline.com/{domain}/");
        var token = await authContext
            .AcquireTokenAsync("https://graph.microsoft.com/", credentials);

        var graphServiceClient = new GraphServiceClient(new DelegateAuthenticationProvider((requestMessage) =>
        {
            requestMessage.Headers
                .Authorization = new AuthenticationHeaderValue("bearer", token.AccessToken);
            return Task.CompletedTask;
        }));

        return graphServiceClient;
    }

    private async Task UpdateEmployeeCode(
        string employeeCodePropertyName, GraphServiceClient graphApiClient, Employee employee)
    {
        var dictionary = new Dictionary<string, object>();
        dictionary.Add(employeeCodePropertyName, employee.Code);

        await graphApiClient.Users[employee.EmailAddress]
            .Request()
            .UpdateAsync(new User()
            {
                AdditionalData = dictionary
            });
    }

    Looping through all the employee codes, you can update all of them into Azure AD at regular intervals. To verify that the attributes are updated correctly, you can either use the Graph API client to read the extension property or use the Graph Explorer Website.

    Accessing Custom Attributes through Claims

    With Azure AD updated with the employee code for each user, we can now set up the AD application to return the additional property as part of the claims when the web application authenticates with it. The application manifest of the Azure AD application needs to be modified to return the extension property as part of the claims. By default, the optionalClaims property is set to null, and you can update it with the values below.

    Azure AD Application Manifest - Optional Claims

    "optionalClaims": {
        "idToken": [
            {
                "name": "extension_<id>_employeeCodeLocal",
                "source": "user",
                "essential": true,
                "additionalProperties": []
            }
        ],
        "accessToken": [],
        "saml2Token": []
    }
    I updated the idToken property since the .Net Core Web Application uses a JWT ID token. If you are unsure which token type your application uses, you can use Fiddler to find out (as shown below).

    Id token returned

    With the optionalClaims set, the web application is all set to go. For an authenticated user (with the extension property set), the extension property is available as part of the claims. The claim type will be ‘extn.employeeCode’. The below code can be used to extract the employee code from the claim.

    public static string GetEmployeeCode(this ClaimsPrincipal claimsPrincipal)
    {
        if (claimsPrincipal == null || claimsPrincipal.Claims == null)
            return null;

        var empCodeClaim = claimsPrincipal.Claims
            .FirstOrDefault(claim => claim.Type.StartsWith("extn.employeeCode"));
        return empCodeClaim?.Value;
    }

    Usually, the claims start flowing through immediately. However, it once happened to me that the claims did not come through for a long period. I am not sure what I did wrong, but once I deleted and recreated the AD application, it started working fine.

    Although setting additional properties on Azure AD Users is a common requirement, setting it up is not that straight-forward. Hope the portal improves someday, and it would be as easy as setting a list of key-value properties as extension properties, and it would all seamlessly flow through as part of the claims. However, till that day, hope this helps you to set up extra information on your Azure AD users.

    Setting up DbUp in Azure Pipelines

    DbUp in a .Net core console application and Azure Pipelines.

    Azure Pipelines is part of the Azure DevOps offerings, which enable you to continuously build, test, and deploy to any platform and cloud environment. It’s been a while since this has been out, and it’s only recently that I got a chance to play around with it at one of my clients. We use DbUp, a .Net library to deploy schema changes to our SQL Server database. It tracks which SQL scripts have been run already and runs the change scripts needed to get your database up to date.

    Setting up DbUp is very easy, and you can use the script straight from the docs to get started. If you are using the .Net Core console application VS template to set up DbUp, make sure to modify the return type of the Main function to int and to return the appropriate application exit codes (as in the script from the docs). I made the mistake of removing the return statements, only to realize later that build steps were passing even though the DbUp scripts were failing.

    If you are using the .Net Core console application VS template (like I did) make sure you modify the return type of the main function in Program.cs to int.
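    For reference, the getting-started script from the DbUp docs already has this shape; the important parts here are the int return type and the non-zero exit code on failure (the connection string shown is illustrative):

```csharp
static int Main(string[] args)
{
    var connectionString =
        args.FirstOrDefault()
        ?? "Server=(local);Database=MyApp;Trusted_Connection=True;";

    var upgrader = DeployChanges.To
        .SqlDatabase(connectionString)
        .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
        .LogToConsole()
        .Build();

    var result = upgrader.PerformUpgrade();

    if (!result.Successful)
    {
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine(result.Error);
        Console.ResetColor();
        return -1; // non-zero exit code fails the pipeline step
    }

    Console.ForegroundColor = ConsoleColor.Green;
    Console.WriteLine("Success!");
    Console.ResetColor();
    return 0;
}
```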

    In Azure Pipelines, I have the build step publish the build output as a zip artifact. Using this in the release pipeline is a two-step process.

    1 - Extract Zip Package

    Using the Extract Files Task extract the zip package from the build artifacts. You can specify a destination folder for the files to be extracted to (as shown below).

    Extract package

    2 - Execute DbUp Package

    With the package extracted out into a folder, we can now execute the console application (using the dotnet command line) by passing in the connection string as a command line argument.

    Execute package
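    The command for this step is essentially the following (the path, file name, and connection string are placeholders):

```shell
dotnet "$(System.DefaultWorkingDirectory)/dbup/MyApp.DbUp.dll" \
    "Server=tcp:<server>.database.windows.net;Database=<database>;User Id=<user>;Password=<password>"
```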

    You now have your database deployments automated through the Azure Pipelines.

    With Azure Pipelines you can continuously build, test, and deploy to any cloud platform. Azure Pipelines has multiple options to start with, based on your project. Even if you are developing a private application, Pipelines offers you one free Microsoft-hosted parallel job with up to 1800 minutes per month, and one free self-hosted parallel job with unlimited minutes (as it’s anyway running on your infrastructure).

    On the Microsoft-hosted CI/CD with 1800 free minutes, you might want to check the used/remaining minutes at any point during the month. You can find them in the Azure DevOps portal by selecting the relevant organization.

    Organization settings -> Retention and parallel jobs -> Parallel Jobs

    Azure Devops Pipelines - Remaining Build Minutes

    Hope that helps you find the remaining free build minutes for your organization!

    Working under Constraints

    At times you might be working in environments where there are a lot of restrictions on the tools that you can use, the process that you need to follow, etc. Under these circumstances, it is essential that we stick to some core and fundamental principles and practices that we as an industry have adopted. We need to make sure we have those in place no matter what restrictions are imposed. Below are a few of the constraints that my team and I faced at one of our clients, and what we did to stay on top of them and still deliver at speed.

    The issues discussed might or might not immediately relate to you; the important thing is your attitude towards such issues and finding ways around your constraints, keeping yourself productive in the long run.

    No Build Deploy Pipeline

    When I joined the project, it amazed me that we were still building/packaging the application from a local developer system and manually deploying it to the various environments (Dev, Test, UAT, and PROD).

    Whenever a release was to be made, one of the developers had to pause their current work, switch to the appropriate branch for the release, make sure they had the latest code base, and build with the correct configuration to generate a package.

    This might sound like an outdated practice (as it did to me), but here I am at a client, in the year 2018, and it’s still happening. What surprised me even more was that the team did have access to an Octopus server (backed by a Jenkins build server), but since the deployment server did not have access to the UAT/PROD servers, they chose not to use it. You bet this was the first thing I was keen on fixing, as generating a release package from my local system would be the last thing that I want to do.

    After a quick chat with the team, we decided on the below.

    • Set up a build/deploy pipeline up to the Test environment. This allows seamless integration while we are developing features and getting them out for testing. Since we had access up to the Test environment, this was hardly an hour’s work to get working.

    • Since we did not have access to UAT/PROD and the process required us to hand over a deployment package to the concerned team, we set up a ‘Packaging Project’ in Octopus. This project basically unzips the selected build package into our Dev environment server, applies the configuration transforms and zips up the folder into a deployment package. With this, we are now able to create a deployment package for any given build and for any environment. We are also having discussions to enable access to UAT/PROD servers for the deployment servers so that we can deploy automatically, all the way to production.

    The process was no longer dependent on a developer or a developer machine and was completely automated. For those reading this who are in a similar situation but do not have access to a build/deploy system like Jenkins/Octopus, I would set up a simple script to pull down the source given a commit hash/branch/TFS label and perform a build and package independent of the working directory of the developer. This script could run on a shared server (if you have access to one) or, at worst, on a developer’s machine/VM. The fundamental thing we are trying to achieve is to decouple package generation from the current working folder on a developer machine and from the manual steps involved. As long as you have an automated way to create a package, irrespective of what tools/systems you use, you should be safe and sound.
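    Such a script could be as small as the following sketch (the repository URL, solution name, and paths are all placeholders):

```shell
#!/bin/sh
# Hypothetical minimal packaging script: build from a clean checkout of a
# given ref, independent of any developer's working copy.
set -e
REF="$1"                                  # commit hash, branch, or tag
WORKDIR="$(mktemp -d)"

git clone https://example.com/MyApp.git "$WORKDIR"   # clean checkout
git -C "$WORKDIR" checkout "$REF"

msbuild "$WORKDIR/MyApp.sln" /p:Configuration=Release
zip -r "MyApp-$REF.zip" "$WORKDIR/MyApp/bin/Release" # deployment package
```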

    Out of Sync SQL and Code Artifacts

    The application is heavily dependent on Stored Procedures for pulling/pushing data out of the SQL Server database. Yes, you heard it right, Stored Procedures, and ones with business logic in them, which is what makes it even worse. Looking at how the Stored Procedures were maintained, I could see that the team started off with good intentions using DbUp but soon moved away from it. When I joined, the process was to share SQL artifacts as attachments in the Jira story/bug. The Database Administrator (DBA) would then pull them out and manage them separately in a source control repository that was not the same as the application code base.

    There was not much information on why this was the case, but the primary reason they moved away from DbUp was that there was no visibility of the SQL scripts when running updates, as the output of the DbUp project was an executable file. Also, there were poor development/deployment practices that led to the ad-hoc execution of scripts in environments without actually updating source control. This soon left the DBA without control, and the only way to gain it back was to maintain the scripts separately.

    Again we decided to have a quick chat with the team along with the DBA on how to improve the current process, as it was getting harder to track application package versions and the associated scripts to go with that package.

    • DbUp by default embeds the SQL artifacts into the executable, which removes all visibility into the actual scripts. However, this behaviour is configurable using ScriptProviders. By using the FileSystemScriptProvider, we can specify the folder from which to load the SQL scripts. Configuring MSBuild to copy the folder contents to the output and including them in the final package was an easy change. This provided the DBA with the actual SQL artifacts, and he could review them quickly. We also started a code review process and began including the DBA in any changes related to SQL artifacts. This gave even more visibility to the DBA and helped catch issues right at the time of development.

    • With an automated build/deploy up to the Test environment in place, we no longer had to make ad-hoc changes to the databases, and everything was pushed through source control as it was faster and more comfortable.

    With these few tweaks we were now in a much better state, and there was one-to-one traceability between source code and SQL artifacts. It all lived as part of one package, traceable all the way to the source code commit tag auto-generated by the build system.

    Not Invented Here Syndrome

    With the kind of restrictions you have seen till now, you can guess the approach towards third-party services and off-the-shelf products. Most things are still done in-house (including trying to replicate a service bus). The problem with this approach is that there is a limit to how far you can go with it, beyond which you either lose your team or the code grows beyond what you can maintain. When starting out on a new project, while the code base is still small, building your own mechanisms might seem to work well. But once past that point, you no longer want to continue down that path, but rather invest in industry-proven tools. These include logging servers, service buses/queues (if you need one) and email services (especially if you want to track and do statistics on top of the emails sent out).

    The biggest challenge in introducing this is mostly not cost related (as there are a lot of really affordable services for every business); it’s mostly the fear of the unknown and a lack of interest in venturing out into unfamiliar territory. The reasons might vary for you, but try to understand the core reason that is hindering the change.

    One technique that worked to get over the fear of the unknown was to introduce this slowly into the system, one at a time, giving enough time for people to get used to the change.

    Seq was one of the first things we had proposed, and it had long been waiting on the wish list. The team was using Serilog to log, and all the logs were stored in a SQL table, making it really hard to query and monitor them. The infrastructure team did not want to install Seq as it was all new to them, and they were not sure about the additional task of managing a Seq instance. So we suggested they just have it on the development server and get familiar with the application first. After a couple of days, the business was seeing the benefit of the increased visibility into the logs, and the infrastructure team was happy with it as well. Within a week, they were happy to install one for the Test environment too. At the time of writing, we are looking at getting a Seq instance on the UAT server and soon hope to have a production instance as well. Getting the interested stakeholders to have a feel for the application and slowly introducing the change is a great way to get buy-in.

    Now we are trying to push for a service bus!

    Build Server without NuGet access

    The build server we were using was hosted in-house, and the box it ran on did not have internet access. This meant that we could not have any external package dependencies pulled at build time. We chose to include the package references along with the source code, which is what I tend to prefer anyway. All our third-party libraries were pushed along with the source code repository, so the build machine had all the required dependencies and did not need internet connectivity to make a build.

    Those are just a subset of the issues that we ran into, and I bet you there were many smaller ones. At times the problems are not technical in nature, but more about communication and how effectively you are able to get all the people involved to get along with each other.

    Any journey to advancement is about valuing the people around you, understanding them and taking them along with the change. It’s a journey that the team needs to make together and not a solo one.

    Different people have different experiences, pain points, concerns and targets to check off. So as a team you need to understand what works for everyone and come to a collective agreement. Just getting all of the concerned parties into a room and having a healthy discussion (mainly by not being prescriptive but descriptive of the issues that you are facing) solves most of the problems.

    Do you work in a similar environment? What challenges do you face at work? Sound off in the comments!

    Subresource Integrity (SRI)

    Enable browsers to verify files they fetch (for example, from a CDN) are delivered without manipulation.

    This article is part of a series of articles - Ok I have got HTTPS! What Next?. In this post, we explore how to use Subresource Integrity and the issues it solves.

    Subresource Integrity (SRI) is a security feature that enables browsers to verify that files they fetch (for example, from a CDN) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched file must match.

    Subresource integrity

    Using the integrity attribute on script and link elements enables browsers to verify externally linked files before loading them. The integrity attribute takes a base64-encoded hash prefixed with the corresponding hash algorithm (at present sha256, sha384 or sha512), as shown in the example below.

    <script src="https://example.com/library.js"
      integrity="sha384-[base64-encoded hash]"
      crossorigin="anonymous"></script>

    Generating SRI Hash

    To generate the SRI hash for files that are accessible over a URL, you can use srihash.org or srigenerator, depending on what hash algorithm version you want. If you are going to generate it for local files, you can use the OpenSSL command-line tool (which should be part of your Git Bash shell, if you are looking around for it like I did).

    openssl dgst -sha256 -binary FILENAME.js | openssl base64 -A
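    To produce the complete attribute value, including the algorithm prefix, a small wrapper over that command helps. This is a sketch; `sri_hash` is my own helper name, not a standard tool.

    ```shell
    # Compute a full SRI value ("<algo>-<base64 digest>") for a local file.
    # Pass sha256, sha384 or sha512 as the algorithm.
    sri_hash() {
        algo="$1"; file="$2"
        echo "$algo-$(openssl dgst "-$algo" -binary "$file" | openssl base64 -A)"
    }
    ```

    For example, `sri_hash sha384 FILENAME.js` prints a value you can paste straight into an integrity attribute.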

    Third-Party Libraries

    For third-party libraries (JS and CSS) referenced via a CDN, you can grab the script/link element along with the integrity attribute from the CDN sites. Here is an example from cdnjs.

    Generate script tag along with SRI Hash

    When referencing third-party libraries via a CDN, it’s good to fall back to a local copy, for cases where the CDN is unreachable or the integrity check fails. I chose to include the integrity attribute on the fallback copy as well.

    <script>
        window.jQuery ||
        document.write('<script src="{{ root_url }}/javascripts/libs/jquery/jquery-2.0.3.min.js" crossorigin="anonymous" integrity="sha256-ruuHogwePywKZ7bI1vHGGs7ScbBLhkNUcSSeRjhSUko=">\x3C/script>')
    </script>

    Application Specific Files

    For application-specific JavaScript files, you need to regenerate the hash every time you modify them. You could look at integrating this with your build pipeline to make it seamless, using the OpenSSL command-line tool shown above to generate the hashes during your application build process.
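    As a sketch of such a build step (the file layout and manifest name are assumptions, not from the original build), a script can regenerate hashes for every application script and write them to a manifest that your templating layer reads when emitting script tags:

    ```shell
    # Regenerate SRI hashes for every .js file under a folder and write
    # "<filename> <sri-value>" lines to a manifest for the site templates.
    generate_sri_manifest() {
        srcdir="$1"; manifest="$2"
        : > "$manifest"
        for f in "$srcdir"/*.js; do
            [ -e "$f" ] || continue
            hash=$(openssl dgst -sha256 -binary "$f" | openssl base64 -A)
            echo "$(basename "$f") sha256-$hash" >> "$manifest"
        done
    }
    ```

    Run as a post-build step, this keeps the integrity attributes in sync with the files without any manual hashing.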

    Inline JavaScript

    The integrity attribute must not be specified when embedding a module script or when the src attribute is not specified. This means that SRI cannot be used for inline JavaScript. Even though inline JavaScript should be avoided, there are still scenarios where you might use it or have dynamically generated JavaScript. In these cases, we can use the nonce attribute on the script tag and whitelist that nonce in the CSP headers.

    A nonce (number used once) whitelists specific inline scripts. The server must generate a unique nonce value each time it transmits a policy. It is critical to provide an unguessable nonce, as bypassing a resource’s policy is otherwise trivial. See unsafe inline script for an example. Specifying a nonce makes a modern browser ignore ‘unsafe-inline’, which could still be set for older browsers without nonce support.
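    Generating such a nonce server-side is straightforward; for example, with OpenSSL (any cryptographically random, base64-encoded value works - this is just a sketch):

    ```shell
    # Generate a fresh, unguessable, base64-encoded nonce for each response.
    # 16 random bytes is a common choice.
    nonce=$(openssl rand -base64 16)
    echo "$nonce"
    ```

    The value then goes both into the script tag's nonce attribute and into the CSP header for that response.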

    For the jQuery fallback above, we need a nonce attribute since it is loaded inline.

    <script nonce="anF1ZXJ5ZmFsbGJhY2s=">
        window.jQuery ||
        document.write('<script src="{{ root_url }}/javascripts/libs/jquery/jquery-2.0.3.min.js" crossorigin="anonymous" integrity="sha256-ruuHogwePywKZ7bI1vHGGs7ScbBLhkNUcSSeRjhSUko=">\x3C/script>')
    </script>

    We can then specify this nonce in the CSP header’s script-src directive. The nonce value can be any base64-encoded value, but remember it must be unguessable.

    <!-- Web.config CSP header -->
    <add name="Content-Security-Policy"
      value="default-src 'self';script-src c.disquscdn.com 'self' 'nonce-anF1ZXJ5ZmFsbGJhY2s=' 'nonce-ZGlzcXVzc2NyaXB0';" />

    Using a nonce allows us to get away with having an inline script. However, this should be avoided if possible. As you may have noticed, having a nonce attribute does not validate the script contents of the associated tag; the browser executes anything that is within that tag. So if you have dynamic content within the script block, this can be used against you by attackers. Use it only if it’s absolutely necessary. That said, having the nonce attribute for those cases is better, as it limits inline JavaScript to those specific script tags.

    Browser Support

    Check if your browser supports Subresource Integrity. Compared to a while back, most browsers now support SRI.

    SRI Browser Support

    Using SRI, we can make sure that the dependencies we load are as expected and not modified in flight, or at the source, by a malicious attacker. There is always a risk that you need to be willing to take when including external dependencies, as they could already have a threat embedded at the time of hash generation. For popular libraries, this is less likely. For the less popular ones, it’s always a good idea to take a quick look at the code to ensure it’s not malicious. Using some tools to assist you with this is also a good idea, which we will look into in a separate article.

    I was setting up an API at a client recently and found that they currently allow any origin to hit their API by setting the CorsOptions.AllowAll option. In this post, we will look at how to set the CORS options and restrict the API to only the domains that you want it to be accessed from.

    What is Cross-Origin Resource Sharing (CORS)

    Cross-Origin Resource Sharing is a way to relax the browser’s Same-Origin Policy, telling the browser to let a web application running at one origin (domain) access selected resources from a server at a different origin. By specifying the CORS headers, you instruct the browser to allow the listed domains to access your resource. Most of the time, for API endpoints, you want to be explicit about the hosts that can access your API. By setting CORS, you are only restricting/allowing cross-domain access originating from a browser. Setting CORS should not be mistaken for a security feature that restricts access from all other sources. Requests formed outside of the browser, using Postman, Fiddler, etc., can still make it to your API, and you need appropriate authorization/authentication to make sure you are not exposing data to unintended people.

    Cross-Origin Request

    Enabling in Web API

    In Web API there are multiple ways that you can set CORS.

    In the below snippet, I am using the Microsoft.Owin.Cors pipeline to set up CORS for the API. The code first reads the application configuration file to get a list of semicolon (;) separated hostnames, which are added to the list of allowed origins in the CorsPolicy. By passing the corsOptions to the UseCors extension method, the policy gets applied to all the requests coming through the website.

    // app is the IAppBuilder passed to the OWIN Startup Configuration method
    var allowedOriginsConfig = ConfigurationManager.AppSettings["origins"];
    var allowedOrigins = allowedOriginsConfig
        .Split(new[] { ";" }, StringSplitOptions.RemoveEmptyEntries);
    var corsPolicy = new CorsPolicy()
    {
        AllowAnyHeader = true,
        AllowAnyMethod = true,
        SupportsCredentials = true
    };
    foreach (var origin in allowedOrigins)
        corsPolicy.Origins.Add(origin);
    var policyProvider = new CorsPolicyProvider()
    {
        PolicyResolver = (context) => Task.FromResult(corsPolicy)
    };
    var corsOptions = new CorsOptions()
    {
        PolicyProvider = policyProvider
    };
    app.UseCors(corsOptions);

    Setting Multiple CORS Policy

    If you want to have different CORS policies for different controllers/route paths, you can use the Map function to set up the CorsOptions for specific route paths. In the below example, we apply a different CorsOptions to all routes that match ‘/api/SpecificController’ and default to another for all other requests.

    app.Map("/api/SpecificController",
        (appbuilder) => appbuilder.UseCors(corsOptions2));
    app.UseCors(corsOptions);

    CORS ≠ Security

    CORS is a way to relax the Same-Origin Policy and in no way should be seen as a security feature. By setting CORS headers, what we are saying is that the listed additional domains are also allowed to access the resource from a browser environment. However, setting this does not restrict access to your APIs from other sources like Postman, Fiddler or any non-browser environment. Even within browser environments, older versions of Flash allowed modifying and spoofing request headers. Ensure that you are using CORS for the correct reasons and do not assume that it provides security against unauthorized access.

    Hope this helps you set up CORS on your APIs!

    HTTP Content Security Policy (CSP)

    Prevent execution of malicious content in the context of your website.

    This article is part of a series of articles - Ok I have got HTTPS! What Next?. In this post, we explore how to use the Content Security Policy (CSP) header and the issues it solves.

    Content Security Policy (CSP) is a security response header or a <meta> element that instructs the browser which sources of content it should trust for our website. A browser that supports CSP then treats the specified list as a whitelist and only allows resources to be loaded from those sources. CSP allows you to specify source locations for a variety of resource types, referred to as fetch directives (e.g. script-src, img-src, style-src, etc.).

    Content Security Policy

    CSP is an added layer of security that helps to detect and mitigate certain types of attacks, including Cross Site Scripting (XSS) and data injection attacks. These attacks are used for everything from data theft to site defacement or distribution of malware.

    Content-Security-Policy: default-src 'self' *.rahulpnath.com

    Setting CSP Headers

    Web Server Configuration

    CSP can be set via the configuration file of your web server host if you want to specify it as part of the header. In my case, I use an Azure Web App, so all I need to do is add a web.config file to my root with the header values. Below is an example which specifies CSP headers (including Report Only) and STS headers.

    <httpProtocol>
      <customHeaders>
        <add name="Content-Security-Policy" value="upgrade-insecure-requests;" />
        <add name="Content-Security-Policy-Report-Only" value="default-src 'none';report-uri https://rahulpnath.report-uri.com/r/d/csp/reportOnly" />
        <add name="Strict-Transport-Security" value="max-age=31536000; includeSubDomains; preload" />
      </customHeaders>
    </httpProtocol>

    Using Fiddler

    However, if all you want is to play around with the CSP header and you don’t have access to your web server or its configuration file, you can still test these headers by injecting them into the response using a web proxy like Fiddler.

    To modify the request/response in-flight, you can use one of the most powerful features in Fiddler - Fiddler Script.

    Fiddler Script allows you to enhance Fiddler’s UI, add new features, and modify requests and responses “on the fly” to introduce any behavior you’d like.

    Using the below script, we can inject the Content-Security-Policy header whenever the request matches specific criteria.

    Fiddler Script to update CSP

    // Fiddler Script - Inject CSP Header
    if (oSession.HostnameIs("rahulpnath.com")) {
      oSession.oResponse.headers["Content-Security-Policy"] =
        "default-src 'none'; img-src 'self';script-src 'self';style-src 'self'";
    }

    By injecting these headers, we can play around with the CSP headers for the website without affecting other users. Once you have the CSP rules that cater to your site, you can commit them to the actual website. Even with all the CSP headers set, you can additionally set the report-to (or the deprecated report-uri) directive on the policy to capture any policies that you may have missed.


    The Content-Security-Policy-Report-Only header allows you to test the header settings without any impact and to capture any CSP violations that you might have missed on your website. The browser uses this for reporting purposes only and does not enforce the policies. We can specify a report endpoint to which the browser will send any CSP violations as a JSON object.

    Below is an example of a CSP violation POST request sent from the browser to the report URL that I had specified for this blog. I am using an endpoint from the Report URI service (more on this later).

    POST https://rahulpnath.report-uri.com/r/d/csp/reportOnly HTTP/1.1

    {
        "csp-report": {
            "document-uri": "https://www.rahulpnath.com/",
            "referrer": "",
            "violated-directive": "img-src",
            "effective-directive": "img-src",
            "original-policy": "default-src 'none';report-uri https://rahulpnath.report-uri.com/r/d/csp/reportOnly",
            "disposition": "report",
            "blocked-uri": "https://www.rahulpnath.com/apple-touch-icon-120x120.png",
            "line-number": 29,
            "source-file": "https://www.rahulpnath.com/",
            "status-code": 0,
            "script-sample": ""
        }
    }

    Generating CSP Policies

    Coming up with the CSP policies for your site can be a bit tricky as there are a lot of options and directives involved. Your site might also be pulling in dependencies from a variety of sources. Setting CSP policies is an excellent time to review your application dependencies and manage them correctly; for example, you might find a JavaScript file coming from an untrusted source. There are a few ways to go about generating CSP policies. Below are two that I found useful and easy to get started with.

    Using Fiddler

    The CSP Fiddler Extension helps you produce a strong CSP for a web page (or website). Install the extension and, with Fiddler running, navigate to your web pages using a browser that supports CSP.

    The extension adds mock Content-Security-Policy-Report-Only headers to servers’ responses using the report-uri https://fiddlercsp.deletethis.net/unsafe-inline. The extension then listens to the specified report-uri and generates a CSP based on the gathered information.

    Fiddler CSP Rule Collector

    Using Report URI

    ReportURI is a real-time security reporting tool which can be used to collect various metrics about your website. One of its features is a nice little wizard interface for creating your CSP headers. Pricing is usage based, with the first 10,000 reports of the month free (which is what I am using for this blog).

    ReportURI gives a dashboard summarizing the various stats of your site and also provides features to explore these in detail.

    Report Uri Dashboard

    One of the cool features is the CSP Wizard which, as the name suggests, provides a wizard-like UI to build out the CSP for your site. The website needs to be configured to report CSP errors to a specific endpoint on your ReportURI account (as shown below). The header value can be set on either the CSP header or the Report Only header.

    You can find your report URL from the Setup tab on Report URI. Make sure you use the URL under the options Report Type: CSP and Report Disposition: Wizard.

    Content-Security-Policy-Report-Only: default-src 'none';report-uri https://<subdomain>.report-uri.com/r/d/csp/wizard

    Once it is all configured and reports start coming in, you can use the wizard to pick and choose which sources you need to whitelist for your website. You might see a lot of unwanted sources and entries in the wizard as it just reflects what is reported to it; you need to filter them out manually and build the list.

    Once you have the CSP set, you can check whether your site does the Harlem Shake by pressing F12 and running the below script. Though this is not any sort of test, it is a fun exercise.

    Copy-pasting scripts from an unknown source is not at all recommended and is one of the most powerful ways for an attacker to get access to your account. Having a well-defined CSP prevents such script attacks on your sites too. Don't be surprised if your banking site also shakes to the tune of the script below.

    That said, do give the below script a try! I did go through the code pasted below, and it is not malicious; all it does is modify your DOM elements and play some music. The original source is linked below, but I do not control it, and it could have changed since the time of writing.

    // Harlem Shake - F12 on Browser tab and 
    // run below script (Check your Volume)
    //Source: http://pastebin.com/aJna4paJ
    javascript:(function(){function c(){var e=document.createElement("link");e.setAttribute("type","text/css");
    document.body.appendChild(e)}function h(){var e=document.getElementsByClassName(l);
    for(var t=0;t<e.length;t++){document.body.removeChild(e[t])}}function p(){var e=document.createElement("div");
    function d(e){return{height:e.offsetHeight,width:e.offsetWidth}}function v(i){var s=d(i);
    return s.height>e&&s.height<n&&s.width>t&&s.width<r}function m(e){var t=e;var n=0;
    while(!!t){n+=t.offsetTop;t=t.offsetParent}return n}function g(){var e=document.documentElement;
    if(!!window.innerWidth){return window.innerHeight}else if(e&&!isNaN(e.clientHeight)){return e.clientHeight}return 0}
    function y(){if(window.pageYOffset){return window.pageYOffset}return Math.max(document.documentElement.scrollTop,document.body.scrollTop)}
    function E(e){var t=m(e);return t>=w&&t<=b+w}function S(){var e=document.createElement("audio");e.setAttribute("class",l);
    setTimeout(function(){N();p();for(var e=0;e<O.length;e++){T(O[e])}},15500)},true);
    e.innerHTML=" <p>If you are reading this, it is because your browser does not support the audio element. We recommend that you get a new browser.</p> <p>";
    document.body.appendChild(e);e.play()}function x(e){e.className+=" "+s+" "+o}
    function T(e){e.className+=" "+s+" "+u[Math.floor(Math.random()*u.length)]}function N(){var e=document.getElementsByClassName(s);
    var t=new RegExp("\\b"+s+"\\b");for(var n=0;n<e.length;){e[n].className=e[n].className.replace(t,"")}}var e=30;var t=30;
    var n=350;var r=350;var i="//s3.amazonaws.com/moovweb-marketing/playground/harlem-shake.mp3";var s="mw-harlem_shake_me";
    var o="im_first";var u=["im_drunk","im_baked","im_trippin","im_blown"];var a="mw-strobe_light";
    var f="//s3.amazonaws.com/moovweb-marketing/playground/harlem-shake-style.css";var l="mw_added_css";var b=g();var w=y();
    var C=document.getElementsByTagName("*");var k=null;for(var L=0;L<C.length;L++){var A=C[L];if(v(A)){if(E(A)){k=A;break}}}
    if(A===null){console.warn("Could not find a node of the right size. Please try a different page.");return}c();S();
    var O=[];for(var L=0;L<C.length;L++){var A=C[L];if(v(A)){O.push(A)}}})()

    I am still playing around with the CSP headers for this blog and am currently testing them using the Report Only header along with ReportURI. Hope this helps you start putting the correct CSP headers on your site as well!