Code Signing MSI Installer and DLLs in Azure DevOps

Code signing using Microsoft SignTool in an Azure DevOps build pipeline.

Code Signing is the process of digitally signing executables and scripts to confirm the software author and guarantee that the code has not been altered or corrupted since it was signed.

Code Signing is something you need to consider if you are distributing installable software packages to your consumers. The most obvious examples are the standalone applications that you install on your machine. Code signing establishes the authenticity of the distributed software package and ensures that it has not been tampered with between when it was created and when it was delivered to you. Code signing usually involves a Code Signing Certificate, which uses a public/private key pair to sign and verify. You can purchase one from any of the certificate issuing authorities; Google should help you choose one.

On a recent project, I had to set up code signing for the MSI installer of a Windows Service. The following artifacts were to be signed: the Windows Service executable, the dependent DLLs, and the MSI installer (which packages the above). We were using Azure DevOps for our build process and wanted to automate the code signing. We use WIX to generate the MSI installer; check out my previous post on Building Windows Service Installer on Azure Devops if you want more details on setting up a WIX installer in DevOps.

Since we need to sign the executable, the DLLs, and the installer that packages them, I do this as a two-step process in the build pipeline - first sign the DLLs and the executable, after which the installer project is built and signed. This ensures the artifacts included in the installer are also signed. We self-host our build agent, so I installed our Code Signing certificate on the machine manually and added the certificate thumbprint as a build variable in DevOps. For a hosted agent, you can upload the certificate as a Secure File and use it from there.

Sign DLLs

The pipeline first builds the whole project to generate all the DLLs and the service executable. Microsoft’s SignTool is used to sign the DLLs and the executable. The tool takes the certificate’s thumbprint as a parameter, along with a few others; check the documentation to see what each parameter does. It accepts wildcards for the files to be signed, so if you follow a convention for project/DLL names (which you should), signing them all can be done in one command.

c:\cert\signtool.exe sign /tr http://timestamp.digicert.com ^
    /fd sha256 /td sha256 /sm /sha1 "$(CodeSignCertificateThumbprint)" ^
    /d "My Project description" MyProject.*.dll

Code Signing Azure DevOps

Sign Installer

Now that we have the DLLs and the executable signed, we need to package them using the WIX project. By default, building a WIX project rebuilds all the dependent assemblies, which would overwrite the above-signed DLLs. To avoid this, make sure to pass a parameter to the build command to not build project references (/p:BuildProjectReferences=false) and only package them. The MSI installer in the build output can then be signed using the same tool.
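If you build the WIX project with a Visual Studio Build or MSBuild task, the flag goes into the MSBuild Arguments of that task. A rough sketch of the equivalent command line invocation (the project name is illustrative):

msbuild MyInstaller.wixproj /p:Configuration=Release /p:BuildProjectReferences=false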

Code Signing Azure DevOps

Sign Powershell Scripts

We also had a few PowerShell scripts that were packaged along in a separate application. To sign them, you can use the Set-AuthenticodeSignature cmdlet. All you need is to get the certificate from the appropriate store and pass it to the cmdlet along with the files that need to be signed.

$cert = Get-ChildItem Cert:\LocalMachine\My `
    | Where-Object { $_.Thumbprint -eq "$(CodeSignCertificateThumbprint)" } `
    | Select-Object -First 1
Set-AuthenticodeSignature -TimestampServer "http://timestamp.digicert.com" `
    -FilePath .\Deploy\*.ps1 -Certificate $cert
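To confirm the scripts were signed, you can check the signature status of each file; a quick sketch using the Get-AuthenticodeSignature cmdlet:

Get-ChildItem .\Deploy\*.ps1 | ForEach-Object { Get-AuthenticodeSignature $_.FullName }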

If you are distributing packaged software for your end users to install, it is generally a good idea to code sign your artifacts and also publish a verifiable hash on your website along with the downloads.

I hope this helps!

Setting Up Dual 4K Monitors - Dell P2715Q and Dell U2718Q

Setting up two 4k monitors for home office.

Dual 4k

Last week I added another 27" 4k monitor to my home office. I had been thinking for a while about having an extra monitor, just for the fun of it and to see if it has any benefits. I have had a Dell P2715Q for over a year and was looking to get another one of the same model. However, Dell seems to have discontinued this model, and it was not available anywhere in Australia. The recommended alternative is the Dell U2718Q, which is also a 27" 4k but with thinner bezels. I also like the color and appearance of the new monitor.

I have the monitors in the below order: my laptop on the left (which I mostly leave closed for the moment), the new Dell U2718Q in the center (the main working display), and the P2715Q on the right (slightly angled in).

Dual monitor layout

Surface Pro and Dual 4k

My daily work machine is a Surface Pro i7 (Model 1796), which has a Mini DisplayPort for video output. With just one Dell monitor, connecting it to the Surface was easy and worked great at 60Hz. The only way to connect an extra display to the Surface Pro is to either daisy chain the monitors or use a dock/USB device with more display ports.

The Dell P2715Q supports daisy chaining; the U2718Q does not. But since there are only two monitors, only one of them needs to support MST (Multi-Stream Transport), along with the graphics card on your laptop. MST needs to be explicitly enabled in the monitor settings (Menu -> Display -> MST -> Primary); check out the User’s Guide for more details. I assume it is off by default because it puts the monitor into 30Hz as opposed to the default 60Hz.

Daisy chaining is straightforward - output from the laptop goes to the input of the first monitor (in my case the P2715Q, as only that supports MST), and the video out of that connects to the input on the U2718Q.

Dell MST

However, note that with MST turned on, both the primary and the secondary monitor will be set to 30Hz at 4k resolution.

MST Modes

Off: Default mode - 4K2K 60Hz with MST function disabled.
Primary: Set as primary mode at 4K2K 30Hz with MST (DP out) enabled.
Secondary: Set as secondary mode at 4K2K 30Hz with MST (DP out) disabled.

You can confirm the display settings from ‘View advanced display info’ in the start menu (Windows 10). The Surface Pro display runs at 60Hz, while the other two monitors run at 30Hz.

I have come across other people mentioning they have had success connecting one 4k from the surface display port and another one from the Surface dock, both running at 60Hz. However, I am not so keen on getting a dock specific to a device.

There are some options from Targus and a few other brands, but they are all a bit costly. For a full-blown list of options for connecting a Surface Pro to multiple monitors, check out this blog post.

I have been working at 30Hz for the past week and have not found many issues with it. My work primarily involves text-based interfaces and not much video or image editing, so I don’t see much difference running at 30Hz. But given a chance, I would like to bump up the experience to 60Hz; that would mean shelling out some extra dollars for a dual 4k dock, or switching my work machine to my MacBook Pro.

MacBook Pro (2015)

I have a MacBook Pro (2015) model that I don’t use much these days. Even though it is a bit older, it has much more connectivity than the Surface Pro. It has two Thunderbolt 2 ports, both of which can drive 4k at 60Hz. It also has an additional HDMI port, but that supports 4k only at 30Hz.

Mac book pro dual 4k

If I find 30Hz too much trouble, I will consider switching over to the MacBook rather than getting an external dock. Overall, adding the extra monitor does help a lot. The primary monitor usually has Visual Studio or Code, and the second one has Chrome, Teams, Spotify, etc. running. I might add a monitor arm at some point to make the arrangement cleaner, but that’s something for later!

Custom Authorization Policy Providers in .Net Core For Checking Multiple Azure AD Security Groups

Extending Azure AD Groups Role based access to support combinations of multiple groups to grant access.

In the post .Net Core Web App and Azure AD Security Groups Role based access, we saw how to use Azure AD Security Groups to provide role-based access for your .Net Core applications. We covered only cases where our controllers/functions were granted access based on a single Azure AD Security Group. At times you might want to extend this to combinations of multiple groups, e.g., a user can edit an order if they belong to either the Admin OR Manager group, or only if they belong to both the Admin AND Manager groups. In this post, we will see how to achieve that.

Belongs to Multiple Groups

In the previous post, we added a different policy per AD Security Group and used that in the Authorize attribute to restrict access to a particular Security Group, say Admin. If you want to limit functionality to users who belong to both Admin AND Manager, you can stack two attributes one after the other, as shown below.

[Authorize(Policy = "Admin")]
[Authorize(Policy = "Manager")]
[ApiController]
public partial class AddUsersController : ControllerBase
...

The above code looks for policies named ‘Admin’ and ‘Manager’, which we registered on application startup using the services.AddAuthorization call (as shown in the previous post).
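For reference, a minimal sketch of that registration, assuming the IsMemberOfGroupRequirement from the previous post; the group config variables here are placeholders:

services.AddAuthorization(options =>
{
    // One policy per Azure AD Security Group
    options.AddPolicy(
        "Admin",
        policy => policy.AddRequirements(new IsMemberOfGroupRequirement(adminGroup)));
    options.AddPolicy(
        "Manager",
        policy => policy.AddRequirements(new IsMemberOfGroupRequirement(managerGroup)));
});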

Belongs to Any One Group

There are cases where you want to restrict access to a controller/function to users who are part of at least one of the groups in a given list, e.g., the user is either Admin OR Manager. The natural tendency is to use a comma-separated list of values for the Policy, as shown below.

[Authorize(Policy = "Admin,Manager")]
[ApiController]
public partial class AddUsersController : ControllerBase
...

The above code looks for a single policy named ‘Admin,Manager’, so for it to work you need to register a policy with that exact name.

options.AddPolicy(
    "Admin,Manager",
    policy =>
        policy.AddRequirements(new IsMemberOfAnyGroupRequirement(adminGroup, managerGroup)));

As you can see, I have modified the IsMemberOfGroupRequirement class from the previous blog post into IsMemberOfAnyGroupRequirement, which now takes in a list of AzureAdGroupConfig. The handler for the requirement (IsMemberOfAnyGroupHandler) is updated to check whether the user’s claims contain at least one of the required group claims.

public class AzureAdGroupConfig
{
    public string GroupName { get; set; }
    public string GroupId { get; set; }
}

public class IsMemberOfAnyGroupRequirement : IAuthorizationRequirement
{
    public AzureAdGroupConfig[] AzureAdGroupConfigs { get; set; }

    public IsMemberOfAnyGroupRequirement(params AzureAdGroupConfig[] groupConfigs)
    {
        AzureAdGroupConfigs = groupConfigs;
    }
}

public class IsMemberOfAnyGroupHandler : AuthorizationHandler<IsMemberOfAnyGroupRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, IsMemberOfAnyGroupRequirement requirement)
    {
        // Succeed as soon as the user has a 'groups' claim matching any configured group
        foreach (var adGroupConfig in requirement.AzureAdGroupConfigs)
        {
            var groupClaim = context.User.Claims
                .FirstOrDefault(claim => claim.Type == "groups" &&
                    claim.Value.Equals(
                        adGroupConfig.GroupId,
                        StringComparison.InvariantCultureIgnoreCase));

            if (groupClaim != null)
            {
                context.Succeed(requirement);
                break;
            }
        }

        return Task.CompletedTask;
    }
}

The code now works fine, and users belonging to either Admin OR Manager can access the AddUsersController functionality.

Custom Authorization Policy Providers

Even though the above code works fine, we had to add a policy specific to the ‘Admin,Manager’ combination. Such combinations can soon start to grow in a large application and become hard to maintain. Instead, you can add a custom authorization attribute and customize the policy retrieval to match our needs.

IsMemberOfAnyGroupAttribute is a custom Authorize attribute that takes in a list of group names and concatenates the names using a known prefix and a separator. The known prefix, POLICY_PREFIX, helps us identify the kind of policy that we are looking at, given just the policy name.

Make sure to choose a SEPARATOR that you know will not be there in your group names.

public class IsMemberOfAnyGroupAttribute : AuthorizeAttribute
{
    public const string POLICY_PREFIX = "IsMemberOfAnyGroup";
    public const string SEPARATOR = "_";

    private string[] _groups;

    public IsMemberOfAnyGroupAttribute(params string[] groups)
    {
        _groups = groups;
        var groupsName = string.Join(SEPARATOR, groups);
        Policy = $"{POLICY_PREFIX}{groupsName}";
    }
}

To create a custom policy provider, inherit from IAuthorizationPolicyProvider or from the default implementation available, DefaultAuthorizationPolicyProvider. Since I wanted to fall back to the default policies first and only then provide custom policies, I am inheriting from DefaultAuthorizationPolicyProvider. The only parameter available to us is the policyName, which is used to resolve the appropriate policy. The code first checks for any default policies available, i.e., those which are explicitly registered; I assume any policy referred to by the default Authorize attribute has an associated policy registered explicitly. For policies with the known prefix, POLICY_PREFIX, we extract the group names and build a new IsMemberOfAnyGroupRequirement dynamically.

public class ADGroupsPolicyProvider : DefaultAuthorizationPolicyProvider
{
    private List<AzureAdGroupConfig> _adGroupConfigs;

    public ADGroupsPolicyProvider(
        IOptions<AuthorizationOptions> options,
        List<AzureAdGroupConfig> adGroupConfigs): base(options)
    {
        _adGroupConfigs = adGroupConfigs;
    }

    public override async Task<AuthorizationPolicy> GetPolicyAsync(string policyName)
    {
        var policy = await base.GetPolicyAsync(policyName);

        if (policy == null &&
            policyName.StartsWith(
                IsMemberOfAnyGroupAttribute.POLICY_PREFIX,
                StringComparison.InvariantCultureIgnoreCase))
        {
            var groups = policyName
                .Replace(IsMemberOfAnyGroupAttribute.POLICY_PREFIX, string.Empty)
                .Split(
                    new string[] { IsMemberOfAnyGroupAttribute.SEPARATOR },
                    StringSplitOptions.RemoveEmptyEntries);

            var groupConfigs = (from groupName in groups
                                join groupConfig in _adGroupConfigs
                                on groupName equals groupConfig.GroupName
                                select groupConfig)
                               .ToArray();

            policy = new AuthorizationPolicyBuilder()
                .AddRequirements(new IsMemberOfAnyGroupRequirement(groupConfigs))
                .Build();
        }

        return policy;
    }
}

Don’t forget to register the new policy provider at Startup.

services.AddSingleton<IAuthorizationPolicyProvider, ADGroupsPolicyProvider>();
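The requirement handler and the group configs it depends on must be registered as well (part of the previous post’s setup); a minimal sketch, assuming the configs are read from configuration:

services.AddSingleton(adGroupConfigs); // List<AzureAdGroupConfig> read from configuration
services.AddSingleton<IAuthorizationHandler, IsMemberOfAnyGroupHandler>();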

Using the new attribute is the same as before; however, you don’t need to register a policy for the ‘Admin,Manager’ combination explicitly. When the default policy provider cannot find a policy, our provider returns a dynamic policy with an IsMemberOfAnyGroupRequirement for the two groups. This now happens automatically for any possible group combination.

[IsMemberOfAnyGroupAttribute("Admin", "Manager")]
[ApiController]
public partial class AddUsersController : ControllerBase

Hope this helps you extend Policy-based authorization in ASP.Net Core applications and mix and match the way you want to enable access for your users.

I have been on and off the Pomodoro technique and always wanted to be more consistent in following it. A while back I was using Tomighty, a minimalistic Pomodoro timer. However, with Tomighty, I often forgot to start the timer and soon stopped using it altogether. Recently, when reading through a productivity tips article, I came across Toggl.

Toggl is the leading time tracking app for agencies, teams and small businesses. A simple time tracker with powerful reports and cross-platform functionalities.

Toggl

Pomodoro Tracker

Even though I am interested in tracking time, I am not so keen on the reports and cross-platform functionalities that Toggl provides. Especially after trying to Minimalize my Online Life, I am very particular about adding a new app. The one feature of Toggl that I am interested in is the Pomodoro tracker that comes with the desktop app. Toggl has a mini timer that can float anywhere on your desktop and is very minimalistic. All it has is a task name with a Start/Stop button, displaying the elapsed time. Within the application settings, you can configure the Pomodoro interval and the break length. The timer stops automatically after the set Pomodoro interval.

Reminder

Another Toggl feature that I like is the reminder to track time/Pomodoro. You can choose the days, time, and interval for the reminder. Any time you are not tracking time, a Windows notification reminds you to do so. This reminder should help me stick with following Pomodoro.

Toggle Reminder

Toggl does not have a built-in option to start up when Windows starts, but it’s easy enough to add a shortcut to Toggl in the Windows startup folder.

Happy tracking!

Building Windows Service Installer on Azure Devops

Continuously building a Windows installer on Azure DevOps using VdProj or WIX.

Recently I was looking into packaging a Windows Service as an MSI installer. I wanted the MSI created in the build pipeline, in this case Azure DevOps, and published as a build artifact. The Windows Service targets the .Net Framework, and looking around for installer options, I found mainly the two approaches discussed below.

Visual Studio Installer Projects (*.VdProj)

Microsoft Visual Studio Installer Projects is an extension that adds support for installer projects to Visual Studio. By adding a setup project to the solution, you can create a setup file, stepping through a wizard, that installs your application. If you are looking for how to set up the installer project, this stackoverflow answer shows you exactly how. Once you have the installer project set up locally and the MSI file generated on building the solution, we can set this up in the Azure DevOps pipeline and automate it.

The Visual Studio Installer Projects require a custom build agent.

The only way I could find to get the installer project to run and build an MSI file was to set up a custom build agent; hosted agents do not support this at the moment. I set up the custom agent on a Windows machine and have not tried any of the other variants. The only tricky thing in setting up the custom agent was step 4 under Prepare Permissions. To find the scope ‘Agent Pools (read, manage)’, make sure you click the ‘Show all/less scopes’ link towards the bottom of the page (as shown in the image below) - at times some things just miss your eyes! The rest was pretty straightforward, and you can have the custom build agent set up in minutes.

Azure DevOps - Custom Agent token setup

In your build pipeline definition, make sure to select the new custom agent as your default Agent pool. The Build VS Installer is a custom task that can be used to build your Visual Studio Installer projects (.vdproj files). Since MSBuild cannot be used to build these projects, you need to make sure Visual Studio is installed on the agent along with the Installer Projects extension. Setting up the custom task is straightforward - you can either choose to build just one particular installer project in the solution or all of them.

Azure Devops - Build Pipeline

I ran into the error message ‘An error occurred while validating. HRESULT = 8000000A’ when running this build through the pipeline. I soon figured out that others had faced this in the past, and running DisableOutOfProcBuild.exe solved the issue. To do this in the pipeline, add a command line task (the Set EnableOutOfProcBuild step in the image above) and use the script appropriate for your VS version.
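DisableOutOfProcBuild.exe ships with Visual Studio; a sketch of the command line step, assuming VS2017 Enterprise on the agent (adjust the edition and version to match your install):

cd "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\CommonExtensions\Microsoft\VSI\DisableOutOfProcBuild"
DisableOutOfProcBuild.exe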

Make sure to either select the ‘Create artifact for .msi’ option in the custom build task or manually copy the MSI out to the artifacts directory. The build now generates an MSI every time!

WIX

WiX is an open source project that provides a set of tools to build Windows installation packages. The installer packages are XML based, and the learning curve is relatively steep. However, it offers a lot more features and capabilities than the Visual Studio Installer Projects we saw above. Microsoft hosted agents support building WIX projects, and I was able to run them successfully on the Hosted VS2017 agent.

WIX projects can run on the Hosted VS2017 agent. Just this one reason makes WIX a far better choice over VdProj if you are starting fresh.

Azure Devops WIX

If you are running on a custom build agent, you will have to install the WiX toolset for everything to work. The default build task in Azure DevOps is all that is required to build the project, as WIX integrates well with MSBuild. As you can see, WIX is an easier setup with fewer hassles, so I definitely recommend that path if you are not already invested in VdProj.

Hope this helps you set up building installer projects on Azure DevOps.

Brisbane has a lot of places to go around, especially ones you can cover in a day. We usually prefer starting in the morning at about 8 am and getting back by around 2-3 pm. To most of the places we carried food and had a small picnic; my son, Gautham, enjoys it a lot, as do we. This is all possible because of my wife, Parvathy - special kudos to her culinary skills.

TLDR;

Lake Moogerah

Lake Moogerah makes an excellent place for a day trip, or you can even camp overnight amid its scenic beauty and activities. You can go boating, take a stroll over the Moogerah Dam wall, or hike up the mountains for a great view. This place has got everything in one spot and makes a perfect destination for the entire family.

Lake Moogerah

Venman Bushland

Venman Bushland National Park is still one of my favorite walks around Brisbane. The park is also home to a lot of wildlife, and you might be lucky enough to see some if you keep your eyes open. We spotted a wallaby towards the end of the walk.

Venman Bushland National Park

Gold Coast and Sunshine Coast

Gold Coast needs no introduction. If you are in Brisbane, there is every chance you have already heard of it and been there. There is something here for everyone: beaches, surfing, and theme parks, to name a few. There are around five theme parks, each of which takes a day in itself. Getting a yearly pass helps, as you can go back as many times as you want. Sealife and Wet’n’Wild are the ones we go to often.

Gold Coast

Head in the opposite direction from the Gold Coast, and you reach the Sunshine Coast, which also has a lot to offer a day tripper. The Sealife there is the right place for a day trip, and it has an Octonauts zone, which is one of Gautham’s favorites.

Noosa Heads

Surrounded by beach, river, hinterland and national parks, Noosa provides a wide range of activities and adventures. Check out the Noosa Markets if you are there on a Sunday.

Noosa Heads

Tin Can Bay

If you fancy feeding wild dolphins, Tin Can Bay is the place to do it. It does get a bit crowded (even on a weekday), but everybody gets a chance. You might have to start early if you want to make it to the center by 7 in the morning. We stayed there overnight and clubbed it with Noosa Heads on the way. On the way back, we went to Rainbow Beach, where you can drive along the beach if you are interested.

Tin Can Bay

Great White Rock

At the Great White Rock, you can enjoy a wide range of activities, including hiking, bird-watching, horse riding, and mountain bike riding, to name a few. There are multiple hiking trails, making it perfect for all ages.

Great White Rock

Mt Coot-tha

Located close to the city, Mount Coot-tha has a lot to offer. Don’t miss the scenic views from the lookout, especially great during sunrise and sunset. There are also multiple bushwalking trails, including a kids trail. The Planetarium, located in the Brisbane Botanic Gardens, has various shows and activities. The lookout is also a good ride up from the city if you are into cycling.

Mount Coot Tha

Glasshouse Mountains

The Glass House Mountains are remnants of volcanic activity, and these volcanic peaks make a perfect day trip location. Good trails and lookouts along the way make the drive there enjoyable as well.

Glasshouse Mountains

Tamborine

The Tamborine mountains have a lot to offer and will make you come back for more. Lots of different trails, the Skywalk, the Glow Worm Caves, and waterfalls are just a few. The glow worm caves are a unique experience, worth visiting and helping serve the cause of protecting the species.

Tamborine

Springbrook

Standing on top of an ancient volcano, Springbrook is just an hour’s drive from Brisbane and has views that stretch forever. You can see some of the oldest trees in Australia, cool swimming holes, and walking trails. Don’t miss the Natural Bridge, a picturesque rock formation formed naturally by the waterfall over the basalt cave.

Springbrook

Nerima Gardens

Nerima Gardens are the Japanese gardens of Ipswich and make an excellent getaway for the family. Right next to the gardens is the Ipswich Nature Center, which houses a variety of animals and birds. Admission to these parks is free, which makes it even better (however, they really appreciate donations).

Nerima Gardens

Mt Nebo and Mt Glorious

Mount Glorious and Mount Nebo are known for their bushwalking trails. The mountains are next to each other but are best enjoyed over multiple days. There are tracks of varying levels, making them suitable for people of all fitness levels.

Mount Glorious and Mount Nebo

Eat Street

With great city and river views, and open only from Friday to Sunday, Eat Street is a unique experience you can get in Brisbane. Lots of food options and entertainment make this a lively place. Make sure to check out their site for special events, jumping castles, etc., to keep the little ones in the family busy. There is a small entry fee, but make sure you get stamped if you are going out and want to return the same day.

Eat Street

Carseldine Markets

The Carseldine Farmers and Artisan Markets are a great way to spend your Saturday morning checking out the local produce, arts, and crafts, along with some good food and coffee. There is no entry fee, and there is lots of parking as well. There are a lot of other similar markets around Brisbane; a quick Google search should help you find the ones nearest to you.

Carseldine Markets

Always make sure to check out the place details and general tips before heading out, especially if you are hiking. Carry enough water, sunscreen, insect repellent, etc. I take along the Camelbak Octane XCT on such trips, which holds enough water for the three of us. Check out the hiking checklist for more detailed instructions.

Enjoy your weekends, and sound off in the comments on what other places you recommend checking out in and around Brisbane.

Over the last week, I have been reading the book Digital Minimalism by Cal Newport. The central idea of the book is being aware of the various technologies affecting our lives and making a conscious effort to choose only those that are required and add value to your life.

Minimalism is the art of knowing how much is just enough. Digital Minimalism applies this idea to our personal technology. It’s the key to living a focused life in an increasingly noisy world. - Cal Newport

State of My Online Life

Before starting the book, I have to admit that my online life was not that ordered or thought through. However, I was aware of social media applications taking a significant part of my time - casually browsing without getting any value in return. I had intentionally stopped using Facebook for a couple of months, and now I am completely off it. Instagram soon followed, except for some occasional posting. I uninstalled both apps from my phone, as it was the main access point to these sites. I had also intentionally turned off all notifications on the phone for a long time and found it really helpful.

However, what I was not aware of was that with these two applications gone, I soon started relying on other apps to fill their place. I am into running and found myself spending more time on Strava. For the social part, I got more into WhatsApp, YouTube, LinkedIn, and Twitter. When nothing else was there, I was hanging on to the email applications, pulling to refresh to see if anything interesting had come in (as if I was expecting a million dollar email). Only after starting the book did I realize that these apps had taken over from the ones that I had given up.

Digital Declutter

The title of the book, ‘Digital Minimalism’, immediately caught my attention when I first saw it, and I was keen to read it. Primarily, I wanted to reduce my phone usage, as that was where most of my time was going. Going through the digital declutter phase, I took note of all the technologies and applications on my phone. The Screen Time feature on the iPhone (you can use Digital Wellbeing if you are on Android, or install the RescueTime application) made me more aware of the time that I had been spending on the phone and the apps that took most of it.

Screen Time Report on Iphone

I realized that a majority of my time was spent on WhatsApp, especially on group chats - scrolling through all the forwarded messages/videos in them and always checking back for more content. Even though I have a Kindle, I was reading more on the Kindle app on the phone; most of the time when reading, I would be distracted by something else and wander off to a different app. And even though I had notifications turned off, new features like Badges took their place and started pulling me back into the apps.

After noting down all the apps and analyzing them, I started decluttering my phone.

  • Exited all Whatsapp Group Chats
  • Removed the below apps
    • Emails (Gmail and Outlook)
    • Slack
    • Strava
    • Yammer
    • LinkedIn
    • Kindle
  • Disabled Badges notification
  • Disabled Raise to Wake (https://support.apple.com/en-au/HT208081): This is one of the features that lures you into looking at the phone even if you did not intend to.
  • Microsoft Teams: Initially I had removed Teams as well, but realized it was the only way to communicate quickly with my Readify team members, so I decided to install it back.

By deleting the Kindle app, I am forcing myself not to use the phone for reading books. I am keen to try the idea of reading physical books and taking notes while reading, an idea that struck me while reading the Bullet Journal blog (which is also where I came across the book ‘Digital Minimalism’ for the first time). I was in India recently and took advantage of the much lower book prices there, gifting myself some self-help books and a few others.

Books

Interestingly, I also came across Bullet Journaling at the same time, which aligns with Digital Minimalism, as it forces you to sync your ideas, thoughts, and to-dos to paper as opposed to a digital system. I have started trying this out alongside and find it helpful - more on it in a different blog post.

Bullet Journal

Digital Minimalism Is An Ongoing Process

Digital Minimalism is not a one-time activity, but something to perform on an ongoing basis and any time you think of adding a new technology into your life. It has been just over a week since I started decluttering my online life, and I am already finding benefits. I pick up my phone less often and have less mental load to keep track of.

Phone Pickups after Decluttering

I plan to do the same decluttering process with my laptop once I get into the flow. Decluttering is a great way to bring more focus into your life and gives you a lot more time than you previously had. How decluttered is your online life?

Tip of the Week: Squoosh - Make Images Smaller

Compress the images that you share online.

A while back I wrote about PNGGauntlet, a Windows application for reducing the size of PNG images. If you are looking for something that can handle any image format and is available through the browser, check out Squoosh.

Squoosh is an image compression web app that allows you to dive into the advanced options provided by various image compressors.

Squoosh offers multiple compression options and defaults to MozJPEG. It also exposes advanced settings that you can play around with to find what best suits your needs. The default compression settings themselves provide a huge benefit (a 69% reduction in size in my case). I use this primarily for the images that I share on this blog.

Squoosh your images!

Windows Service Using Topshelf, Quartz and Autofac

Walkthrough of setting up a recurring job scheduler.

Whenever there is a need for automated jobs to run on-premises for a client, my default choice has been a Windows Service along with Quartz. Last year I blogged about one such instance. However, I did not get into the details of setting up the project and the associated dependencies to run the service.

In this post, I will walk through how I went about setting up such a recurring job scheduler, to make it easy for me or anyone else who runs into a similar situation.

Topshelf

Topshelf makes the creation of Windows services easy by giving you the ability to run the service as a console application while developing, and to easily deploy it as a service. Setting up Topshelf is straightforward - all you need is a console application (targeting the .Net Framework) with a reference to the Topshelf NuGet package. To set up the service, you modify Program.cs with some setup code that creates the windows service and sets some service metadata, as sketched below.
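A minimal sketch, loosely based on the Topshelf quickstart; the service class and metadata here are placeholders, not the actual project’s:

using System;
using Topshelf;

// Placeholder service class; later in this post it is replaced by SchedulerService
public class MyService
{
    public void Start() { /* start timers/schedulers here */ }
    public void Stop() { /* clean up and stop here */ }
}

public class Program
{
    public static void Main()
    {
        var rc = HostFactory.Run(x =>
        {
            x.Service<MyService>(s =>
            {
                s.ConstructUsing(name => new MyService());
                s.WhenStarted(svc => svc.Start());
                s.WhenStopped(svc => svc.Stop());
            });

            x.RunAsLocalSystem();

            // Metadata shown in the Windows Services console
            x.SetServiceName("MyScheduler");
            x.SetDisplayName("My Scheduler Service");
            x.SetDescription("Runs recurring sync jobs on a schedule.");
        });

        Environment.ExitCode = (int)Convert.ChangeType(rc, rc.GetTypeCode());
    }
}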

Autofac

With Topshelf set up, we have a running windows service application. To add dependency injection, so that you do not have to wire up all your dependencies manually, you can use Autofac. The Topshelf.Autofac library helps integrate Topshelf with the Autofac DI container - pass the container instance to the UseAutofacContainer extension method on HostConfigurator.

var container = Bootstrapper.BuildContainer();

var rc = HostFactory.Run(x =>
{
    x.UseAutofacContainer(container);

    x.Service<SchedulerService>(s =>
    {
        // Resolve the service and its dependencies from the Autofac container
        s.ConstructUsingAutofacContainer();

        // config is a ScheduleConfig instance loaded from app settings (not shown here)
        s.WhenStarted(tc => tc.Start(config));
        s.WhenStopped(tc => tc.Stop());
    });
});

Quartz.Net

The SchedulerService will now be instantiated using the Autofac container, which makes it easy to inject dependencies into it. We need to be able to schedule jobs within the SchedulerService, hence we inject an IScheduler from Quartz.Net. Add a reference to the Quartz NuGet package, and you are all set to run jobs on a schedule. To integrate Quartz with Autofac, so that job dependencies can also be injected via the container, we use the Autofac.Extras.Quartz NuGet package.
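The jobs themselves are classes implementing Quartz’s IJob interface. A hypothetical sketch of what MySyncJob (referenced below) could look like, assuming the Task-based API of Quartz 3.x; the real job’s dependencies and work will differ:

using System.Data;
using System.Threading.Tasks;
using Quartz;

public class MySyncJob : IJob
{
    private readonly IDbConnection _connection;

    // Resolved from the Autofac container per job execution
    public MySyncJob(IDbConnection connection)
    {
        _connection = connection;
    }

    public Task Execute(IJobExecutionContext context)
    {
        // ... perform the sync work using _connection ...
        return Task.CompletedTask;
    }
}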

Wiring it Up

Below is a sample setup of the Autofac container that registers the jobs (MySyncJob) in the assembly and adds a scheduler instance to the container (using the QuartzAutofacFactoryModule). The IDbConnection is registered to match the lifetime scope of a Quartz job, so that each job execution gets a different connection instance.

public static class Bootstrapper
{
    public static IContainer BuildContainer()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<SchedulerService>();

        var schedulerConfig = new NameValueCollection
        {
            { "quartz.scheduler.instanceName", "MyScheduler" },
            { "quartz.jobStore.type", "Quartz.Simpl.RAMJobStore, Quartz" },
            { "quartz.threadPool.threadCount", "3" }
        };

        builder.RegisterModule(new QuartzAutofacFactoryModule
        {
            ConfigurationProvider = c => schedulerConfig
        });

        builder.RegisterModule(new QuartzAutofacJobsModule(typeof(MySyncJob).Assembly));

        var connectionString = ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;
        builder
           .RegisterType<SqlConnection>()
           .WithParameter("connectionString", connectionString)
           .As<IDbConnection>()
           .InstancePerMatchingLifetimeScope(QuartzAutofacFactoryModule.LifetimeScopeName);
       
        // Other registrations

        var container = builder.Build();
        return container;
    }
}

The SchedulerService class starts the scheduled jobs when the service starts up and shuts them down when the service stops. As you can see, the IScheduler instance is constructor injected by Autofac. On start, the jobs are added to the scheduler (I am using a cron schedule in the example below).

public class SchedulerService
{
    private readonly IScheduler _scheduler;

    public SchedulerService(IScheduler scheduler)
    {
        _scheduler = scheduler;
    }

    public void Start(ScheduleConfig config)
    {
        ScheduleJobs(config);
        _scheduler.Start().ConfigureAwait(false).GetAwaiter().GetResult();
    }

    private void ScheduleJobs(ScheduleConfig config)
    {
        ScheduleJobWithCronSchedule<MySyncJob>(config.MySyncJobSchedule);
        ScheduleJobWithCronSchedule<MyOtherSyncJob>(config.MyOtherSyncJobSchedule);
    }

    private void ScheduleJobWithCronSchedule<T>(string cronSchedule) where T : IJob
    {
        var jobName = typeof(T).Name;
        var job = JobBuilder
            .Create<T>()
            .WithIdentity(jobName, $"{jobName}-Group")
            .Build();

        var cronTrigger = TriggerBuilder
            .Create()
            .WithIdentity($"{jobName}-Trigger")
            .StartNow()
            .WithCronSchedule(cronSchedule)
            .ForJob(job)
            .Build();

        // Pass the job detail along with the trigger so the job is stored in the scheduler
        _scheduler.ScheduleJob(job, cronTrigger).ConfigureAwait(false).GetAwaiter().GetResult();
    }

    public void Stop()
    {
        _scheduler.Shutdown().ConfigureAwait(false).GetAwaiter().GetResult();
    }
}

The sync jobs have their own dependencies, which are again injected using the Autofac container. Adding new jobs is easy; all we need to do is make sure each job is set up with the appropriate schedule and that its dependencies are registered in the container.

Hope this helps you with setting up recurring scheduler jobs for on-prem scenarios.

Migrating Octopress To Hugo

Migrated my blog again - Here's how I went about doing it.

I have been on the Octopress blogging platform for around 5 years and was fairly happy with it. I had optimized the Octopress workflow for new posts, set it up for continuous delivery, and also enabled scheduling posts in the future.

I had been wanting to migrate off Octopress for a while (reasons below) but kept putting it off, since I did not want to go through another migration pain. Now that it is all done, the migration was not as hard as I thought. In this post, I will walk through the reasons for migrating away from Octopress, the actual migration steps involved, and tweaking the default Hugo settings/theme and workflow to get what I wanted.

Reasons To Migrate

  • No Longer Maintained: Octopress is no longer maintained by anyone, and it’s hard to keep up with all the dependent library updates and Ruby version changes. I had builds breaking randomly due to dependent package updates, and it was not something I liked dealing with.

  • Terribly Slow: To build my full site, Octopress takes around 2 minutes. Since I have modified my workflow to build only the draft posts when working locally, I usually don’t have to wait that long, but it’s still slow.

These two reasons were pressing enough to migrate off Octopress. Hugo was the natural choice for its speed and community, and it is the next highest rated after Jekyll (/Octopress). I also chose to move away from Azure hosting and use Netlify to host this blog.

Why Netlify? With Octopress, I had my build pipeline push the generated site contents back into GitHub (to a separate branch) and then had Azure deploy that branch automatically using a GitHub trigger. If I were to remain on Azure, I would have to do almost the same. Netlify comes with a Hugo template; the template is automatically detected when pointed to your repository and sets up all that is required to deploy the generated static content. All I had to update was the Build Environment Variable for HUGO_VERSION.

Netlify can host your Hugo site with CDN, continuous deployment, 1-click HTTPS, an admin GUI, and its own CLI.

I moved this site over to HTTPS a while back and had been using Cloudflare’s Shared SSL. Moving over to the free Let’s Encrypt certificate required additional setup on Azure. Netlify takes out all this complexity and handles this all for you in the background. Once you set up a custom domain, it’s provisioned with a Let’s Encrypt certificate.

Migration

The actual migration of the content was mostly related to moving all the files and fixing up some code blocks.

  • Move Files: Moving files was easy, as both platforms support Markdown. Everything from your source folder maps into the content folder in Hugo. Pages that were in folders in Octopress are now Markdown files with the appropriate name in Hugo. The actual posts in Markdown had the date prepended to the file name; using a PowerShell script (see the sketch after this list), I stripped off the first 11 characters (YYYY-MM-DD-) and moved the posts into the blog folder (since all my blog posts are under the /blog URL path). All my images live under the static folder.

Octopress to Hugo - Files

  • Fixing Code Blocks: Octopress supported adding a custom title on code blocks, which is not available out of the box in Hugo. You can use a custom shortcode to set this up; however, I chose to remove the titles, as only a few code blocks had them. Using regex search in VSCode, it’s easy to get rid of them all at once.
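A sketch of the PowerShell rename mentioned above; the source and destination paths are illustrative and assume posts named YYYY-MM-DD-slug.md:

Get-ChildItem .\source\_posts\*.md | ForEach-Object {
    # Drop the leading 'YYYY-MM-DD-' (11 characters) from the file name
    $newName = $_.Name.Substring(11)
    Move-Item $_.FullName (Join-Path .\content\blog $newName)
}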

Configuring Hugo

With all the content ported over, all that was left was to select a theme and customize it, make sure all existing URLs worked on the new site, configure search, and a few other things.

After a bit of hunting around for themes, I decided to go with Minimo for its simplicity and support for most of Hugo’s configuration options. All my theme overrides are in the layouts folder. I added support for showing a paged list of all my blog posts and made a few layout changes for list views and headers. I also added Google Custom Search Engine support as a widget in the sidebar and set up the custom 404 page to enable searching the site for content.

The site is now running on Hugo + Netlify. Building the whole site (not just the drafts) takes around 3-4 seconds (cold build), and with the build watcher running, every file change takes around 200 milliseconds - Hugo is blazing fast. If you face any issues or have any feedback, kindly drop a comment or send me a tweet.
