A while back I wrote about various one-day trip options around Sydney. Here is a list of places around Sydney that we traveled to during long weekend breaks, with a day or two of overnight stay.

TLDR;

Coffs Harbour

Coffs Harbour is one of the places I liked the most across all my trips. It’s been almost a year since I made the trip, and the memories are still fresh. The beaches are great, especially Jetty Beach. The rainforest walk in Dorrigo was the best I have had to date, especially because of the rain the night before. Coffs Harbour is perfect for a 3-4 day trip, and there are a lot of places to visit around.

Jetty Beach, Coffs Harbour

Port Macquarie

We headed off to Port Macquarie to celebrate Gautham’s birthday. Gautham likes strawberries a lot, which is why we chose Port Macquarie. Ricardoes Tomatoes & Strawberries is located just ten minutes north of Port Macquarie and offers a unique pick-your-own-strawberries experience. You can spend around 2-3 hours here, and make sure you don’t miss the scones from the cafe. Port Macquarie is also a great place for whale watching, and we headed off on an early morning trip to be with the whales. The boat ride (PortJet) is an experience in itself, and to our luck, we were able to see around three whales up close. We also went to Dooragan National Park, Kattang Nature Reserve, Perpendicular Point and the Charles Hamey lookout.

Whale watching and Strawberry picking, Port Macquarie

Grand Pacific Drive

The Grand Pacific Drive makes a great one-day trip, as well as a multi-day trip for those who want to take their time along this stretch. Starting from the Royal National Park and stretching all the way to the Sapphire Coast, it makes for a great drive with beautiful scenery and a lot of places to visit along the way. The Grand Pacific Drive site has all the details that you need to plan your trip, and it also has a trip planner that makes planning easier. If you want to cover most of the places along the way in a single trip, it is best to give it 2-3 days. During my trip, I stopped over at Wollongong and only made it as far as Kiama.

Grand Pacific Drive

Blue Mountains

Just 90 minutes from Sydney by car, the Blue Mountains has a lot of attractions worth visiting, making it a good place for an extended weekend trip. Wentworth Falls, Echo Point, and the Three Sisters are some of the popular lookouts. Scenic World offers some good rides and entertainment for kids; I liked the world’s steepest incline railway ride in particular, though the entry tickets are a bit overpriced.

Three Sisters, Blue Mountains

Jenolan Caves is another hour’s drive from the Blue Mountains and is a must-do. It’s great for people of all ages, and if you have kids, they will love it. Make sure you check the different cave options and choose one that suits the people in your group. Booking a spot in advance helps, and make sure you arrive on time - the drive up there can be a bit slow, so give yourself enough buffer time before your cave walk starts.

Canberra

Unlike Sydney, Canberra is a planned city, and you can tell that from the moment you enter it. It’s a beautiful little city with a wide variety of things to visit. We started off with the Cockington Green Gardens, followed by the National Dinosaur Museum. You can spend almost half a day with these, and try out The Hamlet food trucks as well. Parliament House and the Australian War Memorial are also worth visiting. If you time your visit during September-October, you can also see Floriade - the tulip flower festival.

Floriade Tulip Festival, Canberra

Nelson Bay, Hunter Valley, Orange, Port Stephens, etc. are some of the places on our list that we could not make it to yet. I moved to Brisbane at the end of last year and am not sure when I will have another chance to explore more around Sydney. But I have new places to look forward to now - exploring Brisbane!

I was given a console application written in .NET Core 2.0 and asked to set up a continuous deployment pipeline using TeamCity and Octopus Deploy. I struggled a bit with some parts, so I thought it was worth putting together a post on how I went about it. If you have a better or different way of doing things, please shout out in the comments below.

By the end of this post, we will have a console application that is automatically deployed to a server and running, anytime a change is pushed to the associated source control repository.

Setting Up TeamCity

Create a New Project and add a new build configuration just like you would for any other project. Since the application is in .NET Core, install the .NET CLI plugin on the TeamCity server.

Build Steps to build .Net Core

The first three build steps use the .NET CLI to Restore, Build and Publish the application. These steps restore the dependencies of the project, build it and publish all the relevant DLLs into the publish folder.

The published application now needs to be packaged for deployment. In my case, deployments are managed using Octopus Deploy. For .NET projects, the preferred way of packaging for Octopus is using OctoPack. However, OctoPack does not support .NET Core projects; the recommendation is to use either dotnet pack or Octo.exe pack. Using the latter, I have set up a Command Line build step to pack the contents of the published folder into a zip (.nupkg) file.

octo pack --id ApplicationName --version %build.number% --basePath published-app

The NuGet package is published to the NuGet server used by Octopus. Using the Octopus Deploy: Create Release build step, a new release is triggered in Octopus Deploy.

Setting Up Octopus Deploy

Create a new project in Octopus Deploy to manage deployments. Under the Process tab, I have two steps - one to deploy the Package and another to start the application.

Octopus Deploy Process Steps

For the Deploy Package step I have enabled Custom Deployment Scripts and JSON Configuration variables. Under the pre-deployment script, I stop any existing .NET applications. If multiple .NET applications are running on the box, select your application explicitly.

Pre Deployment Script
Stop-Process -Name dotnet -Force -ErrorAction SilentlyContinue

Once the package is deployed, the custom script starts up the application.

Run App
cd C:\DeploymentFolder
Start-Process dotnet .\ApplicationName.dll

With all that set up, any time a change is pushed to the source control repository, TeamCity picks it up, builds it and triggers a deployment to the configured environments in Octopus Deploy. Hope this helps!

Often when working with SQL queries, I come across the need to capitalize SQL keywords across a large query - for example, to capitalize the SELECT, WHERE and FROM clauses. When it is a large query or stored procedure, this is faster done using a text editor. Sublime Text is my preferred editor for this kind of text manipulation.

Sublime Text comes with a few built-in text casing converters that we can use to convert text from one case to another. Combined with the simultaneous editing feature, case conversion lets us manipulate large documents easily.

Convert case options in sublime text

For example, let’s say I have the SQL query below. As you can see, the SELECT and FROM keywords are cased differently across the query.

select * From Table1
select * From Table2
select * From Table3
SELECT * FROM Table4
Select * From Table5

To standardize this (preferably capitalize them all), highlight one of the ‘select’ keywords and select all occurrences of the keyword (ALT + F3). Once all occurrences of ‘select’ are highlighted, bring up the command palette (CTRL + SHIFT + P on Windows) and search for ‘Convert Case’. From the options listed, choose the case that you want to convert to. All selected occurrences of the keyword will now be in the chosen case.

Hope this helps you when you have a lot of text case manipulations to be done.

Posts per month - 2016

A year has gone by so fast, and it is again time to do a year review.

TLDR;

2017 was the transformation year. Regular exercise and healthy eating helped me lose around 20 kilos. Lots of travel and blogging made it an excellent year. Reading, Photography and Open source did not go that great. Looking forward to 2018!

What went well

Blogging

It has been both a good and a bad year as far as this blog goes. Including the ‘Tip of the Week’ series, I wrote seventy-six posts this year, an average of over six posts per month. This is the good part, as it is well past the minimum of four posts a month that I set as a goal last year. But looking at the actual posts per month graph below, it is clear that I fell short of it on a month by month basis. Up until August I had a steady stream of posts coming in, after which it started dropping off, with one month (November) having no posts at all. Mainly my laziness is to blame, but I can also point to reasons like the Vietnam trip, shifting to Brisbane, etc.

Posts per month in the year 2017

Running

I had started running towards the end of December 2016. Running was one of my goals for 2017, and it has seen a good improvement: I ran over 750 kilometers, including a half marathon. I am yet to participate in any running events and am planning to in the coming year. I have also started cycling, and it is an excellent way to cross-train.

Year in Sport

Travel

We did our first international holiday, ten days in Vietnam, and it was a great experience. We also went around Australia visiting the Blue Mountains, Canberra, Port Macquarie and Brisbane, along with lots of one-day trips around Sydney. Mandarin picking, strawberry picking and whale watching were some of the top activities of the year.

Strawberry Picking, Ricardoes

What didn’t go well

  • Reading Had set out with a goal of 21 books but ended up finishing only ten. Of the books I read, Mindset and How to Win Friends and Influence People were the best.

  • Photography One trip every three months, with photos posted from each, was another goal. The travel part went well (see above), but my DSLR always remained in the bag. Except for a few pictures on the phone camera, there was not much photography done.

  • FSharp FSharp was again on and off this year. Apart from a small utility that I created for Todoist, I did not do much F# work.

Goals for 2018

  • Blogging Stick to 4 posts a month. Need to get back on schedule.

  • Running Attend a few running events. Run a marathon.

  • Swimming Having started cycling along with running has got me thinking about a triathlon. The only thing in between is swimming, and I have no clue how to swim. Learning to swim is one of the key goals for the upcoming year. The target is to be able to swim one km.

  • Open Source Start working on a side project. Need to find a matching project first.

  • Reading Read 15 books

Wishing you all a Happy and Prosperous New Year!

I recently upgraded to a Garmin Fenix 3 HR from my Forerunner 630. After a few runs with the Fenix 3, I realized that in Training Mode it does not do auto lap. I have a custom training workout for a 10k with no repeat modes in it. This workout was what I used on my FR630, and it used to auto lap at 1km. That no longer happens in the Fenix 3 HR.

Software Details

Fenix 3 HR: 4.70

Forerunner 630: 7.50(bdd586f)

Garmin Auto Lap Not working in Training Mode, Fenix 3 HR

After googling around, I understood that auto lap in Training Mode is a feature available only on specific models/software versions. One reasoning behind it is that auto lap might create issues if people are training in intervals larger than 1km: breaking into laps at every 1km would make it harder, nearly impossible, to compare their intervals. For workouts where you want auto lap at 1km (or at any custom distance), you can use the Repeat feature as shown below. Setting up the workout as 10 x 1km helps to analyze the run at 1km intervals.

Garmin Auto Lap Using Repeat, Fenix 3 HR

Depending on the model/software version of your Garmin watch, you might have to tweak your workout plans. Hope this helps!

Scheduling

At one of my clients, there was a requirement to schedule various rules to send out alert messages via SMS, email, etc. A Rule consists of the below and a few other properties:

  • Stored Procedure: The stored procedure (yes, you read that correctly) to check whether an alert needs to be raised.
  • Polling Interval: The time interval at which a Rule needs to be checked.
  • Cool-Off Period: The time to wait before running a Rule again after an alert was raised.

All Rules are stored in a database. New rules can be added and existing ones updated via an external application. Since the client is not yet in the cloud, using Azure Functions, Lambda, WebJobs, etc. is out of the question. It needs to be a service running on-premises, so I decided to keep it as a Windows Service.

 public class Rule
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string StoredProc { get; set; }
    public TimeSpan PollingInterval { get; set; }
    public TimeSpan CoolOffPeriod { get; set; }
    ...
}

Because of my past good experiences with HangFire, I initially set off using that, only to soon discover that it can schedule jobs only down to the minute level. Even though this is a feature that has been discussed for a long time, it is yet to be implemented. Since some of the rules are critical to the business, the client wants to be notified as soon as possible, which means having a polling interval in seconds for those rules.

After reaching out to my friends at Readify, I decided to use Quartz.NET. Many had good experiences using it in the past and recommended it highly. Another option that came up was FluentScheduler; there was no particular reason to pick Quartz.NET over it.

Quartz.NET is a full-featured, open source job scheduling system that can be used from smallest apps to large-scale enterprise systems.

Setting up and getting started with the Quartz scheduler is fast and easy, and the library has well-written documentation. You can update the application’s configuration file to tweak various attributes of the scheduler.

App/Web.config file
<configuration>
  <configSections>
    <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  </configSections>
  <quartz>
    <add key="quartz.scheduler.instanceName" value="TestScheduler" />
    <add key="quartz.jobStore.type" value="Quartz.Simpl.RAMJobStore, Quartz" />
  </quartz>
</configuration>

The RAMJobStore setting indicates the store to use for storing jobs. Other job stores are available if you want jobs to persist across application restarts.
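
With the configuration in place, creating and starting the scheduler takes only a couple of lines. Here is a minimal sketch, assuming the synchronous Quartz.NET 2.x API that the rest of the snippets in this post use (Quartz.NET 3.x exposes async equivalents):

Start the Scheduler
using Quartz;
using Quartz.Impl;

// Reads the <quartz> section from the App/Web.config and creates the scheduler
ISchedulerFactory factory = new StdSchedulerFactory();
IScheduler scheduler = factory.GetScheduler();
scheduler.Start();

// ... schedule the Alert and Refresh jobs here ...

// On service shutdown; true waits for running jobs to complete
scheduler.Shutdown(true);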

Setting Up Jobs

Basically, there are three jobs - Alert Job, CoolOff Job, and Refresh Job - set up for the whole application. The Alert and Refresh Jobs are scheduled on application start. The CoolOff Job is triggered by the Alert Job as required. Any data that is required by the job is passed in using JobDataMap.

Schedule an Alert Job
...
var job = JobBuilder.Create<AlertJob>()
    .WithIdentity(rule.GetJobKey())
    .WithDescription(rule.Name)
    .SetJobData(rule)
    .Build();

var trigger = TriggerBuilder
    .Create()
    .WithIdentity(rule.GetTriggerKey())
    .StartNow()
    .WithSimpleSchedule(a => a
        .WithIntervalInSeconds((int)rule.PollingInterval.TotalSeconds)
        .RepeatForever())
    .Build();

scheduler.ScheduleJob(job, trigger);
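
The GetJobKey, GetTriggerKey and SetJobData calls above (and the GetRuleFromJobData and cool-off helpers used further below) are not part of Quartz itself - they are small helpers around the Rule class that are not shown in the post. A minimal sketch of how they could look, assuming the rule’s properties are simply carried in the JobDataMap:

Rule Scheduling Helpers (sketch)
using System;
using Quartz;

public static class RuleSchedulingExtensions
{
    // Unique Quartz keys per rule, derived from the rule id
    public static JobKey GetJobKey(this Rule rule) => new JobKey("alert-" + rule.Id);
    public static TriggerKey GetTriggerKey(this Rule rule) => new TriggerKey("alert-trigger-" + rule.Id);
    public static JobKey GetCoolOffJobKey(this Rule rule) => new JobKey("cooloff-" + rule.Id);
    public static TriggerKey GetCoolOffTriggerKey(this Rule rule) => new TriggerKey("cooloff-trigger-" + rule.Id);

    // The point in time at which the cool-off period for the rule ends
    public static DateTimeOffset GetCoolOffDateTimeOffset(this Rule rule) =>
        DateTimeOffset.UtcNow.Add(rule.CoolOffPeriod);

    // Carry the rule's properties in the job data map
    public static JobBuilder SetJobData(this JobBuilder builder, Rule rule) =>
        builder
            .UsingJobData("Id", rule.Id)
            .UsingJobData("Name", rule.Name)
            .UsingJobData("StoredProc", rule.StoredProc)
            .UsingJobData("PollingIntervalSeconds", (int)rule.PollingInterval.TotalSeconds)
            .UsingJobData("CoolOffSeconds", (int)rule.CoolOffPeriod.TotalSeconds);

    // Rebuild the rule from the merged job data map inside a job
    public static Rule GetRuleFromJobData(this IJobExecutionContext context)
    {
        var map = context.MergedJobDataMap;
        return new Rule
        {
            Id = map.GetInt("Id"),
            Name = map.GetString("Name"),
            StoredProc = map.GetString("StoredProc"),
            PollingInterval = TimeSpan.FromSeconds(map.GetInt("PollingIntervalSeconds")),
            CoolOffPeriod = TimeSpan.FromSeconds(map.GetInt("CoolOffSeconds"))
        };
    }
}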

Alert Jobs

The Alert Job is responsible for checking the stored procedure and sending the alerts if required. If an alert is sent, it starts the Cool-Off Job and pauses the current job instance. The DisallowConcurrentExecution attribute prevents multiple instances of a Job with the same key from executing concurrently. We explicitly set the Job Key based on the Rule Id; this prevents duplicate messages from being sent out if a job instance takes more time to execute than its polling interval.

[DisallowConcurrentExecution]
public class AlertJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        var alert = context.GetRuleFromJobData();
        var message = GetAlertMessage(alert);
        if(message != null)
        {
            SendMessage(message);
            CoolOff(alert);
        }
    }

    public void CoolOff(Rule rule)
    {
        var job = JobBuilder.Create<CoolOffJob>()
            .WithIdentity(rule.GetCoolOffJobKey()) // key for the one-off cool-off job
            .WithDescription(rule.MessageTitle)
            .SetJobData(rule)
            .Build();

        var trigger = TriggerBuilder
            .Create()
            .WithIdentity(rule.GetCoolOffTriggerKey())
            .StartAt(rule.GetCoolOffDateTimeOffset())
            .Build();

        scheduler.PauseJob(rule.GetJobKey());
        scheduler.ScheduleJob(job, trigger);
    }
    ...
}

Cool-Off Job

The Cool-Off Job is a one-time job scheduled by the Alert Job after an alert is sent successfully. It is scheduled to start after the cool-off time configured for the alert, so it triggers only after the set amount of time has passed. It then resumes the original Rule Job to continue execution.

public class CoolOffJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        var alert = context.GetRuleFromJobData();
        ScheduleHelper.ResumeJob(alert);
    }
}
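
ScheduleHelper.ResumeJob is not shown in the post; all it needs to do is resume the alert job that was paused when the alert was sent. A rough sketch, reusing the helpers above and assuming ScheduleHelper holds a reference to the scheduler:

ScheduleHelper (sketch)
public static class ScheduleHelper
{
    // Set once at application start, after the scheduler is created
    public static IScheduler Scheduler { get; set; }

    public static void ResumeJob(Rule rule)
    {
        // Resume the alert job that was paused when the alert fired
        Scheduler.ResumeJob(rule.GetJobKey());
    }
}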

Refresh Job

The Refresh Job is a recurring job that polls the database for any changes to the Rules themselves. If any change is detected, it removes the existing schedule for the alert and adds the updated alert job.

[DisallowConcurrentExecution]
public class RefreshJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        var allRules = GetAllRules();
        ScheduleHelper.RefreshRules(allRules);
    }
}
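
ScheduleHelper.RefreshRules is likewise not shown. A simplistic sketch, which just drops and re-creates the schedule for every rule (a real implementation would compare against the current schedule and only touch rules that actually changed), could be added to the ScheduleHelper above. ScheduleAlertJob here stands for the job/trigger creation shown in the ‘Schedule an Alert Job’ snippet earlier:

public static void RefreshRules(IEnumerable<Rule> rules)
{
    foreach (var rule in rules)
    {
        // Remove the existing job (and its triggers) for the rule, if any
        if (Scheduler.CheckExists(rule.GetJobKey()))
            Scheduler.DeleteJob(rule.GetJobKey());

        // Re-create the alert job and trigger with the latest rule definition,
        // using the same code shown in the 'Schedule an Alert Job' snippet
        ScheduleAlertJob(rule);
    }
}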

With these three jobs, all the rules get scheduled at the start of the application and run continuously. Anytime a change is made to the rule itself, the Refresh Job refreshes it within the time interval that it is scheduled for.

Tip: If there are a lot of rules with the same polling interval, it is good to stagger their start times using a delayed start per job instance. Doing that makes sure that all the jobs do not fire at the same time.
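
For example, instead of starting every trigger with StartNow(), each rule’s trigger could be given a small offset based on its position in the list of rules being scheduled. A sketch, where index is assumed to be the rule’s position in that list:

var trigger = TriggerBuilder
    .Create()
    .WithIdentity(rule.GetTriggerKey())
    .StartAt(DateTimeOffset.UtcNow.AddSeconds(index * 5)) // stagger each rule's first run by 5 seconds
    .WithSimpleSchedule(a => a
        .WithIntervalInSeconds((int)rule.PollingInterval.TotalSeconds)
        .RepeatForever())
    .Build();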

So far I have found the Quartz library stable and reliable and have not faced any issues with it. The library is also quite flexible and adapts well to different needs.

Hope this helps. Merry Xmas!

I was recently playing around with the MessageMedia API, trying to send an SMS and get the status of the sent SMS. Sending the SMS and getting the status of the last sent message always happened in succession when testing manually: once I sent the message, I waited for the API response, grabbed the message id from the response and used that to form the get-status request.

Postman is a useful tool if you are building or testing APIs. It allows you to create, send and manage API requests.

Postman Chaining Requests

I added two requests and saved them to a collection in Postman - one to Send Message and the other to Get Message status. I created an environment variable to hold the message id. For the request that sends a message, the Test snippet below is added. It parses the response body of the request, extracts the message id of the last sent message and saves it to the environment variable. The Test snippet always runs after the request is performed.

var jsonData = JSON.parse(responseBody);
postman.setEnvironmentVariable("messageId", jsonData.messages[0].message_id);
tests["Success"]= true;

The Get Message request uses the messageId from the environment variables to construct its URL. The URL looks like the one below.

https://api.messagemedia.com/v1/messages/{{messageId}}

When this request is executed, it fetches the messageId from the environment variable, which was set by the previous request. You no longer have to copy the message id manually and use it in the URL. This is how we chain data from one request to another. Chaining requests is also useful in automated testing using Postman. Hope this helps!

At times you might need to extract data from a large piece of text. Let’s say you have a JSON response, and you want to extract all the id fields in the response and combine them as a comma-separated list. Here’s how you can easily extract data from large text using Sublime (or any other text editor that supports simultaneous editing).

https://jsonplaceholder.typicode.com/posts
[
  {
    "userId": 1,
    "id": 1,
    "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
    "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto"
  },
  {
    "userId": 1,
    "id": 2,
    "title": "qui est esse",
    "body": "est rerum tempore vitae\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\nqui aperiam non debitis possimus qui neque nisi nulla"
  },
  ...
]

Again, the key here is to select the recurring pattern first - in this case “id”: - and then select all occurrences of it. Once all occurrences are selected, we can expand the selection to the whole line and extract those lines out. Repeat the same to remove the “id”: text, and then follow the same steps we used to combine text.

Hope this helps you to extract data from large text files.

How do you secure the access keys to the Key Vault itself?

If you use a ClientId/Secret to authenticate with the key vault, you are likely to end up having these in the web.config file (though there are ways around it), which is what we set out to avoid in the first place by using Azure Key Vault. The recommended approach until now was to use certificate-based authentication, so that you only need the thumbprint of the certificate in the web.config and can deploy the certificate along with the application. If you are not familiar with either way of authenticating with Key Vault, check out this article. With both secret- and certificate-based authentication, we also run into the problem of credentials expiring, which in turn can lead to application downtime.

Managed Service Identity (MSI) solves this problem by allowing an Azure App Service, Azure Virtual Machines or Azure Functions to connect to Key Vault (and a few other services) without any explicit credentials in the code.

Managed Service Identity (MSI) makes solving this problem simpler by giving Azure services an automatically managed identity in Azure Active Directory (Azure AD). You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having any credentials in your code.

MSI can be enabled through the Azure Portal. E.g., to enable MSI for App Service, the portal has an option as shown below.

Enable Managed Service Identity for Azure App Service

Once enabled, we can add an access policy in the key vault to give permissions to the Azure App Service. Search by the app service name and assign the required access policies.

For an application to access the key vault, we need to use the AzureServiceTokenProvider from the Microsoft.Azure.Services.AppAuthentication NuGet package. Instead of using ClientCredential or ClientAssertionCertificate to acquire the token, we use AzureServiceTokenProvider to acquire the token for us.

var azureServiceTokenProvider = new AzureServiceTokenProvider();
var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
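
Once the KeyVaultClient is created, reading a secret only needs the vault URL and the secret name. For example (inside an async method; the vault and secret names below are placeholders):

// 'myvault' and 'MyConnectionString' are placeholder names
var secret = await keyVaultClient.GetSecretAsync("https://myvault.vault.azure.net/secrets/MyConnectionString");
var connectionString = secret.Value;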

The AzureServiceTokenProvider class tries the following methods to get an access token:

  1. Managed Service Identity (MSI) - for scenarios where the code is deployed to Azure, and the Azure resource supports MSI.
  2. Azure CLI (for local development) - Azure CLI version 2.0.12 and above supports the get-access-token option. AzureServiceTokenProvider uses this option to get an access token for local development.
  3. Active Directory Integrated Authentication (for local development). To use integrated Windows authentication, your domain’s Active Directory must be federated with Azure Active Directory. Your application must be running on a domain-joined machine under a user’s domain credentials.

Local Development

For the AzureServiceTokenProvider to work locally, we need to install the Azure CLI and also set up an environment variable - AzureServicesAuthConnectionString. Depending on whether you want to use ClientId/Secret or ClientId/Certificate-based authentication, the value for the environment variable changes.

Set AzureServicesAuthConnectionString to RunAs=App;AppId=AppId;TenantId=TenantId;AppKey=Secret
Or
RunAs=App;AppId=AppId;TenantId=TenantId;CertificateThumbprint=Thumbprint;CertificateStoreLocation=CurrentUser

Get Tenant Id and AppId

As shown above, you can get the TenantId and AppId from the App Registrations page in the Azure portal. Clicking on the Endpoints button reveals a list of URLs that contain the TenantId GUID. The AppId is displayed against each of the AD applications. Once you set the environment variable, the application will be able to connect to Key Vault without any additional configuration entries in the web/app config.

Azure Managed Service Identity makes it easier to connect to Key Vault and removes the need to have any sensitive information in the application configuration file. It also removes the overhead of renewing the certificates/secrets used to connect to the vault. One less thing to worry about in the application!

As a developer, I often end up needing to manipulate text. Sometimes this text can get quite large, and it might take a while to do it manually. Having a text editor in your tool belt often helps in situations like that. Let’s look at one of the common scenarios that I come across and how we can solve it using a text editor. I use Sublime Text as my go-to editor for such text editing hacks, but you can do this in any text editor that supports simultaneous editing.

Let’s say I get a list of comma-separated values and need to insert double (or single) quotes around each value to use it in a SQL query. To demonstrate this, I went to random.org to generate a list of random values, and ended up having to use the very technique I was about to demonstrate. I generated 12 random numbers, and the site gave a tab-separated list of values, as shown below.

91    66    31    11    90
80    1    24    48    61
61    66

I now need to convert this into a comma-separated list. Let’s see how we can go about doing this.

  1. Select the recurring character pattern. In this case, it is the tab space.
  2. Select all occurrences of the pattern. (Alt + F3 - Find All in Sublime)
  3. Act on all the occurrences. In this case, I want to remove them, so I press Del.
  4. Since I want to introduce a comma between each of the numbers, I first split them into multiple lines using Enter. Now I have all the numbers on a separate line.
  5. Select all the numbers and insert a cursor at the end of each. ( Ctrl + Shift + L)
  6. Insert a comma. We still have cursors at the end of all lines, so pressing Delete again combines all the lines into one. Remove the trailing comma.

Though this is a specific example, I hope you get the general idea of how to go about manipulating text, splitting and combining it as required. I hope you will now be able to insert double (or single) quotes around each value in the comma-separated list that we have, to use it in a SQL query!