While playing around with the Windows Terminal, I set up aliases for commonly used commands.

For example, typing s runs git status.

I wanted to create new command aliases from the command line itself, instead of opening the script file and modifying it manually. So I created a PowerShell function for it.

$aliasFilePath = "<Alias file path>"

function New-CommandAlias {
    param(
        [parameter(Mandatory=$true)]$CommandName,
        [parameter(Mandatory=$true)]$Command,
        [parameter(Mandatory=$true)]$CommandAlias
    )

    # Backtick-escape $args so it is written to the alias file literally,
    # instead of being expanded (to nothing) when this string is built
    $functionFormat = "function $CommandName { & $Command `$args }
New-Alias -Name $CommandAlias -Value $CommandName -Force -Option AllScope"

    $newLine = [Environment]::NewLine
    Add-Content -Path $aliasFilePath -Value "$newLine$functionFormat"
}

. $aliasFilePath

The script overrides any existing alias with the same name. Use the ‘Get-Alias’ cmdlet to find existing aliases.

The above script writes a new function to the alias file and maps an alias to it using the New-Alias cmdlet.

function Get-GitStatus { & git status -sb $args }
New-Alias -Name s -Value Get-GitStatus -Force -Option AllScope

Add this to your PowerShell profile file (run notepad $PROFILE), as we did for theming when we set up the Windows Terminal. In the above script, I write to the ‘$aliasFilePath’ and load all the aliases from that file using the dot sourcing operator.

Below are a few sample usages:

New-CommandAlias -CommandName "Get-GitStatus" -Command "git status -sb" -CommandAlias "s"
New-CommandAlias -CommandName "Move-ToWorkFolder" -Command "cd C:\Work\" -CommandAlias "mwf"

The full gist is available here. I have only tried adding a couple of commands so far, and it worked fine. If you find any issues, please drop a comment.

For a long time, I have been using Cmder as my command line. It was mostly for the ability to copy-paste, open multiple tabs, and add aliases (shortcut commands). I was never particularly interested in other customizations of the command line. However, one of these recent tweets made me explore the new Windows Terminal.

Windows Terminal is a new, modern, feature-rich, productive terminal application for command-line users. It includes many of the features most frequently requested by the Windows command-line community, including support for tabs, rich text, globalization, configurability, theming & styling, and more.

You can install it using the command line itself or get it from the Windows Store. I prefer the Store version as it gets updated automatically.
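
If you use a package manager, you can also install it from the command line; for example, with Chocolatey (package name assumed from the Chocolatey gallery):

choco install microsoft-windows-terminal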

Toggling

Pressing the Windows key + # (the position of the app on the taskbar) works as a toggle. If the app is open and selected, it is minimized; if not, it is brought to the front and selected. If the app is not running, it starts.

In my case, Windows Key + 1 launches Terminal, Windows Key + 2 launches Chrome, Windows Key + 3 launches Visual Studio and so on.

Theming

To theme the terminal, you need to install two PowerShell modules.

Install-Module posh-git -Scope CurrentUser
Install-Module oh-my-posh -Scope CurrentUser

To load these modules by default on launching PowerShell, update the PowerShell profile. For this, run ‘notepad $PROFILE’ from a PowerShell command line. Add the below lines to the end of the file and save. You can choose an existing theme, make a custom one, and customize it further as you want. Here is a great example to get started. I currently use the Paradox theme.

Import-Module posh-git
Import-Module oh-my-posh
Set-Theme Paradox

Restart the prompt, and if you see squares or weird-looking characters, you likely need some updated fonts. Head over to Nerd Fonts, where you can browse for them.

Nerd Fonts patches developer-targeted fonts with a large number of glyphs (icons) and gives you all those cool icons in the prompt.

To make Windows Terminal use the new font, update the settings. Click the button with a down arrow right next to the tabs or use the Ctrl + , shortcut. It opens the profiles.json settings file, where you can update the font face per profile.

"fontFace": "UbuntuMono NF",

Aliasing

I use the command line mostly for interacting with git repositories and like having shorter aliases for frequently used commands, like git status, git commit, etc. My previous command line, Cmder, had a feature to set aliases. Similarly, in PowerShell, we can create a function to wrap the git command and then use the New-Alias cmdlet to create an alias. You can find a good list to start with here and modify it as you need. I keep the list of aliases in a separate file and load it in the profile as below. Having it in Dropbox allows me to sync it across multiple devices and have the same aliases everywhere.

Use the dot sourcing operator to run the script in the current scope, so that everything in the specified file gets added to the current scope.

. C:\Users\rahul\Dropbox\powershell_alias.ps1

An alias overrides any existing alias with the same name, so make sure that you use aliases that don’t conflict with anything you already use. Here is the powershell_alias file that I use.
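
Before adding a new alias, you can check whether the name is already taken; for example:

# Shows the existing definition if 's' is already an alias; stays silent otherwise
Get-Alias -Name s -ErrorAction SilentlyContinue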

I no longer use Cmder and enjoy using the new Terminal. I have just scratched the surface of the Terminal here; there is heaps more you can do - formatting, customizing, adding other shells, etc.

Enjoy the new Terminal!

At work, we usually use DbUp to deploy database changes to SQL Server. We follow certain naming conventions when creating table constraints and indexes. Here is an example:

create table Product
(
  Id uniqueidentifier not null unique,
  CategoryId uniqueidentifier not null,
  VendorId uniqueidentifier not null,

  constraint PK_Product primary key clustered (Id),
  constraint FK_Product_Category foreign key (CategoryId) references Category (Id),
  constraint FK_Product_Vendor foreign key (VendorId) references Vendor (Id)
)

create index IX_Product_CategoryId on Product (CategoryId);

I had to rename a table as part of a new feature. I could have just renamed the table and moved on, but I wanted all the constraints and indexes renamed as well, to match the naming convention. I could not find any easy way to do this and decided to script it.

If you know of a tool that can do this, let know in the comments and stop reading any further 😄.

Since I have been playing around with F# for a while, I chose to write it in F#. SQL Server Management Objects (SMO) provides a collection of objects to manage SQL Server programmatically, and it can be used from F# as well. Using the #I and #r directives, the SMO library path and DLLs can be referenced.

#I @"C:\Program Files\Microsoft SQL Server\140\SDK\Assemblies\";;
#I @"C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies";;
#r "Microsoft.SqlServer.Smo.dll";;
#r "Microsoft.SqlServer.ConnectionInfo.dll";;
#r "Microsoft.SqlServer.Management.Sdk.Sfc.dll";;

The SMO object model is a hierarchy of objects with the Server as the top-level object. Given a server name, we can start navigating through the entire structure and interact with the related objects. Below is how we can narrow down to the table that we want to rename.

let generateRenameScripts (serverName:string) (databaseName:string) (oldTableName:string) newTableName = 
    let server = Server(serverName)
    let db = server.Databases.[databaseName]
    let oldTable = db.Tables |> Seq.cast |> Seq.tryFind (fun (t:Table) -> t.Name = oldTableName)

SMO also allows generating scripts programmatically, very similar to how SSMS lets you right-click on a table and generate the relevant scripts. The ScriptingOptions class allows passing in various parameters that determine the scripts generated. Below is how I create the drop and create scripts.

let generateScripts (scriptingOptions:ScriptingOptions) (table:Table) =
    let indexes = table.Indexes |> Seq.cast |> Seq.collect (fun (index:Index) -> index.Script scriptingOptions |> Seq.cast<string>)
    let fks = table.ForeignKeys |> Seq.cast |> Seq.collect (fun (fk:ForeignKey) -> fk.Script scriptingOptions |> Seq.cast<string>)
    let all = Seq.concat [fks; indexes]
    Seq.toList all

let generateDropScripts (table:Table) =
    let scriptingOptions = ScriptingOptions(ScriptDrops = true, DriAll = true, DriAllKeys = true, DriPrimaryKey = true, SchemaQualify = false)
    generateScripts scriptingOptions table

let generateCreateScripts (table:Table) =
    let scriptingOptions = ScriptingOptions(DriAll = true, DriAllKeys = true, DriPrimaryKey = true, SchemaQualify = false)
    generateScripts scriptingOptions table

For the create scripts, I do a string replace of the old table name with the new table name. The full gist is available here.

Below is what the script generated for renaming the above table from ‘Product’ to ‘ProductRenamed’. This output can be further optimized by passing the appropriate parameters to the ScriptingOptions class.

let script = generateRenameScripts "(localdb)\\MSSQLLocalDB" "Warehouse" "Product" "ProductRenamed"
File.WriteAllLines (@"C:\Work\Scripts\test.sql", script)

ALTER TABLE [Product] DROP CONSTRAINT [FK_Product_Category]
ALTER TABLE [Product] DROP CONSTRAINT [FK_Product_Vendor]
DROP INDEX [IX_Product_CategoryId] ON [Product]
ALTER TABLE [Product] DROP CONSTRAINT [PK_Product] WITH ( ONLINE = OFF )
ALTER TABLE [Product] DROP CONSTRAINT [UQ__Product__3214EC065B6D1E82]
EXEC sp_rename 'Product', 'ProductRenamed'
ALTER TABLE [ProductRenamed]  WITH CHECK ADD  CONSTRAINT [FK_ProductRenamed_Category] FOREIGN KEY([CategoryId])
REFERENCES [Category] ([Id])
ALTER TABLE [ProductRenamed] CHECK CONSTRAINT [FK_ProductRenamed_Category]
ALTER TABLE [ProductRenamed]  WITH CHECK ADD  CONSTRAINT [FK_ProductRenamed_Vendor] FOREIGN KEY([VendorId])
REFERENCES [Vendor] ([Id])
ALTER TABLE [ProductRenamed] CHECK CONSTRAINT [FK_ProductRenamed_Vendor]
CREATE NONCLUSTERED INDEX [IX_ProductRenamed_CategoryId] ON [ProductRenamed]
(
  [CategoryId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
ALTER TABLE [ProductRenamed] ADD  CONSTRAINT [PK_ProductRenamed] PRIMARY KEY CLUSTERED 
(
  [Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
ALTER TABLE [ProductRenamed] ADD UNIQUE NONCLUSTERED 
(
  [Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

One thing missing at the moment is renaming foreign key references from other tables in the database to this newly renamed table. The F# code is possibly not at its best, and I still have a lot of influence from C#. If you have any suggestions for making it better, sound off in the comments.

Hope this helps and makes it easy to rename a table and update all associated naming conventions.

I am always on the lookout for productivity hacks and new systems to improve the way I work. Given my nature of work is in front of a computer, my productivity tool choices have always been digital. However, at the start of this year, I came across an interesting book Digital Minimalism. After reading the book, I have changed my de-facto relationship with many apps and the phone in general. I discovered the book while I was skimming through a blog post related to Bullet Journaling. Given that Bullet Journaling also favors disconnecting from devices, I decided to give it a try.

Though it does require a journal, Bullet Journal is a methodology. It’s best described as a mindfulness practice disguised as a productivity system. It’s designed to help you organize your what while you remain mindful of your why. The goal of the Bullet Journal is to help its practitioners (Bullet Journalists) live intentional lives, ones that are both productive and meaningful.

If you are new to Bullet Journaling check out the quick introduction to Bullet Journaling and also the Recommended reading at the end of this post.

I have been bullet journaling since February 2019, but it is only recently that I have started finding it more useful and have built a workflow around it. When I started with this method, I was using it more as a reactive journaling tool, mostly capturing things after they happened. Even though I did plan a few things ahead and put them in the journal, I did not have any formalized practice around this. Of late, I came across the YouTube channel by Matt Ragland, who shares a lot of tips around bullet journaling and the various strategies that he uses to be more productive. I took some inspiration from it, and from a few other sources, and tweaked my existing process.

Journaling Supplies

If you search for Bullet Journaling on the internet, you will likely see a lot of artistic pages and people talking about various stationery items (pens, stickers, markers, washi tapes, etc.). But if you are like me (not very artistic) and don’t care much about that, ignore it all. All you need is a pen and paper (preferably a notebook). Even though there is an official Bullet Journal notebook, the method works with any notebook/journal that you have. I use a dot-grid journal as it gives proper alignment and guides to draw lines and align lists. Below is what I use for bullet journaling.

Most of the non-artistic bullet journalists go by the name ‘Minimalist Bullet Journal’ - search for that if you are looking for inspiration on the internet.

Brain Dump

If you are starting fresh and trying to organize yourself, one of the first activities I suggest is a Brain Dump. At any point in time, there are a lot of things on my mind and things that I keep committing to myself and others. It is not possible to keep up with everything that I wish to do. So the very first thing to do is to dump everything out onto paper and then decide what needs attention. The Incompletion Trigger List assists in getting everything out of your mind and onto paper. It’s a good idea to block out some time to perform this exercise and give it all the attention it needs. At times, it helps to Slow Down to Go Fast.

Once you have the brain dump on paper, try to choose two to four top-priority items that you want to focus on. I prefer to choose things with different themes. E.g., my main three items (in no particular order) are:

  • Fitness (Running, Cycling and Swimming)
  • Blog and Youtube Channel
  • Learning

Monthly Planning

Before the start of every month (usually on the last day of the previous month), I plan for the upcoming month and capture some of the key things I want to achieve. I try to align these with the top-priority items so they help move those items forward. E.g., I might list the actual blog posts that I am going to publish that month, and also the YouTube videos. That said, I am not always organized enough to come up with blog posts upfront - like the month below (in the image). As for fitness, I plan for any events happening that I want to attend, etc. Usually, I follow a running plan that spans a couple of months, so I don’t have to work out those details every month.

I add in a habit tracker for some of the key things that I want to track for that month. E.g., I try to stretch every day after waking up, drink lots of water, make sure I journal, etc., so I track these every day using the habit tracker (more on it in the Daily Planning section). For reminders and things that need to happen on specific days, I put them in Todoist as well and copy them over into the month’s highlights, if any, for the current month.

Weekly Planning

Every Sunday evening, I spend 15-20 minutes mapping out the upcoming week. It involves reflecting on the week that has passed, carrying over unfinished items, capturing things for the coming week, etc. I capture important meetings at work and any required follow-ups. I create my running workout plans in Garmin Calendar and sync them back to my watch.

Daily Planning

Every day before bed, I spend 5-10 minutes updating my bullet journal and planning for the next day. Looking at the week’s plan, I work out the tasks for the following day and try to break them up into smaller items that move each item on the list forward. I update the habit tracker and cross off items that I have achieved for the day. The idea with the habit tracker is to try not to break the chain. Even if I miss one day, I make sure I get it done the following day. If any new items have come into Todoist, I copy them over for the day.

Sticking to these three planning techniques has helped me be more focused and get more things done. The very act of writing things down and rewriting them (when undone) forces me to be mindful of what I am committing to and doing. I still struggle on the execution side at times and procrastinate; there are days I don’t get anything done. Daily planning helps me reflect on those days and be more mindful the following day. It evens out the ups and downs over a period, and committing to paper gives a stronger sense of needing to complete the task and stick with the plan.

I hope this helps you on your journey!

In an earlier post, we saw how to enable role-based access for .NET Core web applications. We used hardcoded AD group IDs in the application, as below.

"AdGroups": [
  {
    "GroupName": "Admin",
    "GroupId": "119f6fb5-a325-47f9-9889-ae6979e9e120"
  },
  {
    "GroupName": "Employee",
    "GroupId": "02618532-b2c0-4e58-a32e-e715ddf07f63"
  }
]

To avoid hardcoding the IDs in the application config, we can use the Graph API to query the AD groups at runtime. The GraphServiceClient from the Microsoft.Graph NuGet package can be used to connect to the Graph API. In this post, we will see how to use the API client to retrieve the AD groups, covering two authentication mechanisms - one using client credentials and the other using Managed Service Identity.


Using Client Credentials

To authenticate using a client ID and secret, we need to create an AD app in the Azure portal. Add a new client secret under the ‘Certificates & secrets’ tab. To access the Graph API, make sure to add permissions under the ‘API permissions’ tab; I have added the required permissions to read the AD groups. Below is how to create the Graph client using the client credentials.

private static async Task<GraphServiceClient> GetGraphApiClient()
{
    var clientId = "AD APP ID";
    var secret = "AD APP Secret";
    var domain = "mydomain.onmicrosoft.com";

    var credentials = new ClientCredential(clientId, secret);
    var authContext = new AuthenticationContext($"https://login.microsoftonline.com/{domain}/");
    var token = await authContext.AcquireTokenAsync("https://graph.microsoft.com/", credentials);
    var accessToken = token.AccessToken;

    var graphServiceClient = new GraphServiceClient(
        new DelegateAuthenticationProvider((requestMessage) =>
    {
        requestMessage
            .Headers
            .Authorization = new AuthenticationHeaderValue("bearer", accessToken);

        return Task.CompletedTask;
    }));

    return graphServiceClient;
}

Using Managed Service Identity

With the client credentials approach, we have to manage the AD app and the associated secrets. To avoid this, we can use Managed Service Identity (MSI), and the Azure infrastructure will do this for us automatically. To use MSI, turn on Identity for the Azure Web App from the Azure Portal.
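
If you prefer to script it, the Az PowerShell module can turn on the identity as well; below is a minimal sketch, with placeholder resource group and app names:

# Enable a system-assigned managed identity for the web app
# ("my-rg" and "my-web-app" are placeholder names)
Set-AzWebApp -ResourceGroupName "my-rg" -Name "my-web-app" -AssignIdentity $true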

For the MSI service principal to access the Microsoft Graph API, we need to assign the appropriate permissions. This is not possible through the Azure Portal, so we need to use a PowerShell script. As before, we only need permission to read the Azure AD groups. ‘00000003-0000-0000-c000-000000000000’ is the well-known application ID of the Microsoft Graph API. Using that, we can filter the app roles down to the group read permission.

Connect-AzureAD
$graph = Get-AzureADServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"
$groupReadPermission = $graph.AppRoles `
    | where Value -Like "Group.Read.All" `
    | Select-Object -First 1

# Use the Object Id of the web app's managed identity
$msi = Get-AzureADServicePrincipal -ObjectId <WEB APP MSI Identity>

New-AzureADServiceAppRoleAssignment `
    -Id $groupReadPermission.Id `
    -ObjectId $msi.ObjectId `
    -PrincipalId $msi.ObjectId `
    -ResourceId $graph.ObjectId

As we have seen in previous instances with MSI (here and here), we use the AzureServiceTokenProvider to authenticate with MSI and get the token. The client ID and secret are no longer required. The AzureServiceTokenProvider class tries to get a token using Managed Service Identity, Visual Studio, Azure CLI, and Integrated Windows Authentication. In our case, when deployed to Azure, the code uses MSI to get the token.

private static async Task<GraphServiceClient> GetGraphApiClient()
{
    var azureServiceTokenProvider = new AzureServiceTokenProvider();
    string accessToken = await azureServiceTokenProvider
        .GetAccessTokenAsync("https://graph.microsoft.com/");

    var graphServiceClient = new GraphServiceClient(
        new DelegateAuthenticationProvider((requestMessage) =>
    {
        requestMessage
            .Headers
            .Authorization = new AuthenticationHeaderValue("bearer", accessToken);

        return Task.CompletedTask;
    }));

    return graphServiceClient;
}

Getting AD Groups Using Graph Client

Using the Graph API client, we can get the groups from Azure AD and use them to configure the authorization policies, as shown below.

 public void ConfigureServices(IServiceCollection services)
 {
     ...
     services.AddAuthorization(options =>
     {
         var adGroups = GetAdGroups();

         foreach (var adGroup in adGroups)
             options.AddPolicy(
                 adGroup.GroupName,
                 policy =>
                     policy.AddRequirements(new IsMemberOfGroupRequirement(adGroup.GroupName, adGroup.GroupId)));
     });
     services.AddSingleton<IAuthorizationHandler, IsMemberOfGroupHandler>();
 }

 private static List<AdGroupConfig> GetAdGroups()
 {
     var client = GetGraphApiClient().Result;
     var allAdGroups = new List<AdGroupConfig>();

     var groups = client.Groups.Request().GetAsync().Result;

     while (groups.Count > 0)
     {
         allAdGroups.AddRange(
                    groups.Select(a => 
                        new AdGroupConfig() { GroupId = a.Id, GroupName = a.DisplayName }));

         if (groups.NextPageRequest != null)
             groups = groups.NextPageRequest.GetAsync().Result;
         else
             break;
     }

     return allAdGroups;
 }

The AD groups no longer need to be hardcoded in the application. Also, with Managed Service Identity, we do not need any additional AD app/credentials to be managed as part of the application.

Hope it helps!

Brisbane To Gold Coast Cycle Challenge, B2GC 2019

A short recap of the day - Very well organized event!

Brisbane To Gold Coast Cycle Challenge (B2GC) is a fun ride event - but for me, it was a race, a race against myself. The longest I had ridden before this was 50km. For B2GC, I had decided not to take the rest stops and head straight for it.

The event was very well organized. A big thank you to all the organizers and volunteers.

Things I carried on my bike (Check out my post here for specifics of my bike and accessories I use)

  • 4 Energy gels
  • 1 Oats bar - did not use
  • 1 bottle water + 1 bottle Electrolyte
  • Mini Toolkit + puncture kit + 1 spare tube + mini pump - Had all of these in my Aero Wedge
  • Wallet (Id Card + some cash)
  • Mobile Phone

Things in the bag (handed over at the cloakroom at start site)

  • Thongs/sandals - Wore it after the ride
  • A pair of clothes - did not use

B2GC 2019 was on September 15, 2019 - a warm and sunny day, perfect for riding. I woke up at 4 am and got ready. I had put my bike in the car the previous night and packed all my things. Made sure I had everything I needed, said goodbye to my wife, and started for UQ at 4:30. It was a 30-minute drive, and I reached there at around 5 am. Lots of cyclists were already there, getting ready for the early start with the red category. I planned to start with the blue category (< 25 km/hr).

Parked my car at P10 - UQ Centre car park, as instructed on the website; that was quite close to the start point. I am not used to putting the wheels back on the bike, and it took me around 10 minutes. Once all set, I headed to the start point. Dropped off my bag at the cloakroom - they took a 2-dollar donation to get the bag over to the finish site at the Gold Coast. Hit the loo after a short 10-minute queue. I started the ride at around 6, along with all the other blue bib holders, from the Eleanor Schonell Bridge – aka the Green Bridge.

The markings along the way were quite clear; there was no way someone would lose their way. Throughout the ride, I had fellow riders in front of and behind me. At most major intersections, there were volunteers and police officers stopping the traffic and making way for the cyclists. I have not ridden much with cycling shoes and cleats and did face some difficulty clipping in and out at signals. For my work commute, I use regular running shoes, so I don’t have to carry an extra pair for work. In total, I got around 4-5 red stop signals (welcome, as I got to stretch my legs). At one signal, I did fumble a bit on my clipped-in side as I came to a stop. Lucky for me, I didn’t fall over.

When I crossed the finish line at the Gold Coast, my Garmin showed a little over 91 km. I rode to the back of the finish line through a bikeway to make it a full 100. Though I started with the blue category, I finished as an Orange (with an average of 26.3 km/hr) and was pretty happy with my finish time. After the race, there was food and coffee (paid) and lots of stalls. I ate a sausage and rested for a while. I had pre-booked my bus tickets, as part of my B2GC registration, to get back to Brisbane. The bikes were taken in a separate truck, to be dropped off at the same place you board the bus. I took the 10:30 bus back and arrived in Brisbane around 11:45. The bicycle truck arrived about 10 minutes later, and I was back at my car by 12:15. Loaded the bike back in the car and headed home!

Had a great ride and kudos to everyone who participated in the event!

You can find all the photos I took (and the official ones of me) here

It’s not often that you want to debug into applications running on a virtual machine, but that’s not to say it is never required. Recently, at one of my clients, I had to debug into an application running on an Azure virtual machine. The application used Azure AD group logic, and my laptop was not domain-joined. This called for remote debugging, with the application running on a domain-joined virtual machine.


In this post, we will look at how to set up the Virtual Machine and Visual Studio for remote debugging. If you are interested in watching this in action, check out the video in the link above.

Setting up Virtual Machine

To be able to remote debug into a Virtual Machine, it needs to be running the Remote Debugging Monitor (msvsmon). Assuming you have Visual Studio installed on your local machine (which is why you are trying to debug in the first place), you can find msvsmon under the folder C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\Common7\IDE\Remote Debugger. The path might be slightly different based on the version and edition of Visual Studio (Professional, Community, Enterprise); the above path is for Visual Studio (VS) 2019 Professional. Copy the Remote Debugger folder over to the virtual machine. Alternatively, you can install the Remote Debugger tools from the internet; make sure you download the version that matches the Visual Studio version you will be using to debug.

Run the msvsmon.exe as an Administrator, and it will prompt you to set up some firewall rules on the Virtual Machine as below.

Once confirmed, it adds the following firewall rules for the x64 and x86 versions of the application.

The Remote Debugging Monitor listens on a port (each Visual Studio version has a different default) - 4024 for VS 2019. This can be changed under Options if needed. For this example, I have turned off Authentication, as shown below.
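
You can also start msvsmon with these settings from the command line; below is a sketch, assuming the default VS 2019 Professional install path and the no-authentication switches from msvsmon's command-line help:

# Run the 64-bit remote debugger with authentication turned off (any user can connect)
& 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\Common7\IDE\Remote Debugger\x64\msvsmon.exe' /noauth /anyuser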

Azure Portal Settings

In the Azure Portal, under the Networking tab for the Virtual Machine, add an inbound port rule to open the port that msvsmon is listening on - 4024 in this case.
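
The same inbound rule can be scripted with the Az PowerShell cmdlets; here is a sketch with placeholder names, assuming you know the network security group attached to the VM:

# Add an inbound rule for the remote debugger port to the VM's NSG
# ("my-rg" and "my-vm-nsg" are placeholder names; pick a free priority)
Get-AzNetworkSecurityGroup -ResourceGroupName "my-rg" -Name "my-vm-nsg" |
    Add-AzNetworkSecurityRuleConfig -Name "Allow-RemoteDebugger" `
        -Direction Inbound -Access Allow -Protocol Tcp -Priority 1010 `
        -SourceAddressPrefix * -SourcePortRange * `
        -DestinationAddressPrefix * -DestinationPortRange 4024 |
    Set-AzNetworkSecurityGroup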

Debugging From Visual Studio

Now that everything is set up, we are good to debug the application from our local machine. Make sure the application to be debugged is running on the virtual machine. Go to Debug -> Attach to Process in Visual Studio, choose Remote (no authentication) under Connection type, and enter the IP address or FQDN of the VM along with the port number. Shortly, you should see the list of applications running on the machine, and you can choose the appropriate app to debug.

Visual Studio is now debugging the application running on the Virtual Machine. Hope this helps and happy debugging!

Cycling To Work: What's in My Bag

The best thing about not being able to remote work is THE COMMUTE

Commuting to work on a cycle is one thing that I look forward to every day. The initial inertia to get started is very high - but trust me, do it once, and you are very likely to keep doing it. I try to commute to work on my Giant TCR three days a week. The other two days, I go for an early morning run, so I take the bus to work. I use an Osprey Radial 34 as my commute bag. The bag is excellent and comfortably fits everything I need for the day. It also provides exceptional riding comfort with the padded mesh back. I would recommend the 34L version if you cannot leave things at work (like a pair of jeans, etc.)

Below are the things that are typically in my bag

  1. Osprey Radial 34
  2. Topeak Race Rocket HP Pump
  3. Topeak Mini 20 Pro Tool
  4. JetBlack Svelto Photochromic Sunglasses Red/Black
  5. Spare tubes, puncture kit
  6. Work Laptop - Surface Pro or Lenovo X1 Extreme
  7. Logitech MX Master
  8. Bose QC 35 II - Black
  9. Bullet Journal
  10. Sakura Pigma Micron Pen
  11. Ikea Lunch Box
  12. Snacks (Bar, Banana etc.)

And the things on the bike:

  1. Bike - Giant TCR Advanced Pro 1 2016
  2. Lezyne Macro 1100/ Strip PRO 300 Light Set - Black
  3. Topeak Aero Wedge QuickClip Saddle Bag Medium Size
  4. Kryptonite Keeper 785 Integrated Chain

At work (depending on the client I am with), I usually have access to a bike park area and also showers. I use the Kryptonite Keeper 785 Integrated Chain to secure it to the racks. On weekends, when I go for longer rides, I use the Topeak Aero Wedge QuickClip Saddle Bag to carry the mini tool, puncture kit, extra tubes, and keys, etc. Since that is a quick release, it is easy to remove/put on as required.

If you are planning to start commuting to work on your bike, don’t let the list of things put you off from starting. I started with just the bike and added all these things one by one over the last year. Do you commute to work on your bike? What gear do you use and find helpful? If you don’t yet, try to do it at least once a week, and soon you will enjoy it like me!

A few months back, I got a used pair of Jaybird Run headphones to listen to music while running. I enjoy running with them; they are a perfect fit and have never dropped off during my runs. However, after a couple of weeks, the right earbud stopped charging. Here are a few tips that help me get it charged every time - yes, I need to do one of these, or even a combination, every time I pop them back in the case.

  • Remove the ear tips and open/close the case lots of times until it decides to charge
  • Clean the tips and the charging case points with a cotton bud
  • Blow hard into the small gap in the charging case, where the lid locks in. I assume the issue is caused by dust accumulating inside the case. Doing this fixes the charging issue in one or two tries and has been the fastest way to get them to charge.

I love these earphones, but it’s a pain to get them to charge - hope this helps.

Exercism: A Great Addition To Your Learning Plan

“For the things we have to learn before we can do them, we learn by doing them.” - Aristotle

Learning by doing has proven effective, and it is the same when learning a new programming language. However, it is equally important that you learn the right things and in the right manner. Having a mentor or a go-to person in the field that you are learning is essential. Exercism brings both together in one place, free of cost.

Exercism is an online platform designed to help improve your coding skills through practice and mentorship.

Exercism is a great site to accompany your journey in learning something new.

  • It has a good set of problems to work your way through the language.
  • A good breakdown of the language concepts covered through the exercises. Exercism gives an overview of what to concentrate on next.
  • The best part is the mentor feedback system. Each time you submit a main/core exercise, it goes through a mentor approval process. To me, learning a language is more about understanding the language-specific features and writing more idiomatic code.

Exercism is entirely open source and open to your contributions as well. It takes a lot of effort to maintain something of this standard — also a great effort from the mentors by providing feedback.

Feedback is useful, especially when moving across programming paradigms - like between OO and Functional.

Solving a problem is just one piece of the puzzle. Writing idiomatic code is equally important, especially when learning a new language. Feedback such as the above helps you think in the language (in my case, F#).

Don’t wait for feedback approval; work on the side exercises in the meantime. Twitter or Slack channels are also a great way to request feedback. Reading along with a book helps if you need more structured learning. For F#, I have been reading the book Real World Functional Programming.

I discovered Exercism while learning F#. It has definitely helped keep me on track. Hope it helps you!
