A few months back, I got a used pair of Jaybird Run headphones to listen to music while running. I enjoy running with them, and they are a perfect fit - they have never dropped out during my runs. However, after a couple of weeks, the right earbud stopped charging. Here are a few tips that help me get it charging every time - yes, I need to do one of these, or even a combination of them, every time I pop the earbuds back in the case.

  • Remove the ear tips and open/close the case lid several times until it decides to charge
  • Clean the tips and the charging contacts in the case with a cotton bud
  • Blow hard into the small gap in the charging case where the lid locks in. I assume the issue is caused by dust accumulating inside the case. Doing this fixes the charging issue in one or two tries and has been the fastest way to get the earbud to charge.

  I love these earphones, but it's a pain to get them to charge. Hope this helps.

Exercism: A Great Addition To Your Learning Plan

“For the things we have to learn before we can do them, we learn by doing them.” - Aristotle

Learning by doing has proven effective, and the same applies when learning a new programming language. However, it is equally important that you learn the right things in the right manner. Having a mentor or a go-to person in the field that you are learning is essential. Exercism brings both together in one place, free of cost.

Exercism is an online platform designed to help improve your coding skills through practice and mentorship.

Exercism is a great site to accompany your journey in learning something new.

  • It has a good set of problems to work your way through the language.
  • A good breakdown of the language concepts covered through the exercises. Exercism gives an overview of what to concentrate on next.
  • The best part is the mentor feedback system. Each time you submit a main/core exercise, there is an approval system in place. To me, learning a language is more about understanding the language-specific features and writing more idiomatic code.

Exercism is entirely open source and open to your contributions as well. It takes a lot of effort to maintain something of this standard, and the mentors also put in great effort providing feedback.

Feedback is useful, especially when moving across programming paradigms - like between OO and Functional.

Solving a problem is just one piece of the puzzle. Writing idiomatic code is equally important, especially when learning a new language. Feedback such as the above helps you think in the language (in my case, F#).

Don't wait for feedback approval; work on the side exercises in the meantime. Twitter or Slack channels are also a great way to request feedback. Reading a book alongside helps if you need more structured learning. For F#, I have been reading the book Real World Functional Programming.

I discovered Exercism while learning F#. It has definitely helped keep me on track. Hope it helps you!

My recent project got me back into some long-lost technologies, including Excel sheets, VB scripts, Silverlight, bash scripts, and whatnot. One of those things was a bash script that imported data from different CSV files into a data store. There were 40-50 different CSV schemas mapped to their corresponding tables to import. I had to repoint these scripts to write to a SQL Server database.

Challenges with BCP Utility

The bcp utility is one of the best options for bulk importing data from CSV files at the command line. However, the bcp utility requires the fields (columns) in the CSV data file to match the order of the columns in the SQL table. The column counts must also match. Neither was the case in my scenario. The way to work around this with bcp is by providing a Format File.

  • CSV file columns must match order and count of the SQL table
  • Requires a Format File otherwise
    • Static generation is not extensible
    • Dynamic generation calls for a better programming language

The Format File has an XML and a non-XML variant and can be pre-generated or generated dynamically using code. I did not want to pre-generate the format file because I do not own the generation of the CSV files. There could be more files, new columns, and the order of columns can change. Dynamic generation involved a lot more work, and bash scripts didn't feel like the appropriate choice. All of this led me to rewrite the script.

SqlBulkCopy

With C# being my natural choice of programming language and having excellent support through SqlBulkCopy, I decided to rewrite the existing bash script. SqlBulkCopy is the managed-code equivalent of the bcp utility and provides similar functionality. SqlBulkCopy can only write data to SQL Server; however, any data source can be used as long as it can be loaded into a DataTable instance.

The ColumnMappings property defines the relationship between the columns in the data source and the columns in the SQL table. Mappings can be added dynamically by reading the headers of the CSV data file. In my case, the column names were the same as those of the table. My first solution involved reading the CSV file and splitting the data on the comma (",").

var lines = File.ReadAllLines(file);
if (lines.Length == 0)
    return;

var tableName = GetTableName(file);
var columns = lines[0].Split(',').ToList();
var table = new DataTable();
sqlBulk.ColumnMappings.Clear();

foreach (var c in columns)
{
    table.Columns.Add(c);
    sqlBulk.ColumnMappings.Add(c, c); 
}

// Skip the header row (index 0) and process every remaining line
for (int i = 1; i < lines.Length; i++)
{
    var line = lines[i];
    // Explicitly mark empty values as null for SQL import to work
    var row = line.Split(',')
        .Select(a => string.IsNullOrEmpty(a) ? null : a).ToArray();
    table.Rows.Add(row);
}

sqlBulk.DestinationTableName = tableName;
sqlBulk.WriteToServer(table);

The CSV file may have empty values where the associated column in the SQL table is nullable. If an empty string is passed for such a column, SqlBulkCopy throws an error like the following, depending on the column type: "The given value of type String from the data source cannot be converted to type <TYPENAME> of the specified target column".

Explicitly marking the empty values as null (as in the code above) solves the problem.

CSV File Gotchas

The above code worked for all the files until I hit some CSV files that had a comma as a valid value in a few of the columns, as shown below.

Id,Name,Address,Qty
1,Rahul,"Castlereagh St, Sydney NSW 2000",10

Splitting on the comma no longer works as expected, as it also splits the address into two parts. Using a NuGet package for reading the CSV data made more sense, so I decided to switch to CsvHelper. Even though CsvHelper is intended to work with strong types, it also works with generic types, thanks to dynamic. Generating types for each CSV format would be equivalent to generating the format file required for the bcp utility.

List<dynamic> rows;
List<string> columns;
using (var reader = new StreamReader(file))
using (var csv = new CsvReader(reader))
{
    // GetRecords<dynamic> reads the header record as part of enumerating the rows
    rows = csv.GetRecords<dynamic>().ToList();
    columns = csv.Context.HeaderRecord.ToList();
}

if (rows.Count == 0)
    return;

var table = new DataTable();
sqlBulk.ColumnMappings.Clear();

foreach (var c in columns)
{
    table.Columns.Add(c);
    sqlBulk.ColumnMappings.Add(c, c);
}

foreach (IDictionary<string, object> row in rows)
{
    var rowValues = row.Values
        .Select(a => string.IsNullOrEmpty(a.ToString()) ? null : a)
        .ToArray();
    table.Rows.Add(rowValues);
}

sqlBulk.DestinationTableName = tableName;
sqlBulk.WriteToServer(table);

With CsvHelper added in, I am now able to import the different CSV file formats into their corresponding tables in SQL Server. Do add the required error handling around the above functionality. Also, take note of the transaction behaviour of SqlBulkCopy: by default, each bulk copy is performed as an isolated operation, and on failure, no records are written to the database. An external transaction can be passed in when multiple database operations need to succeed or fail together; a rough sketch of this follows.
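The sketch below assumes the connectionString, tableName, and populated DataTable (table) are set up as in the code above; it only illustrates how an external transaction can be handed to SqlBulkCopy.

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        try
        {
            // The bulk copy enlists in the passed-in transaction instead of
            // creating its own internal one
            using (var bulk = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction))
            {
                bulk.DestinationTableName = tableName;
                bulk.WriteToServer(table);
            }

            // Other database operations can share the same transaction here
            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}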

Hope this helps!

In the previous post, we saw how to Enable History for Azure DevOps Variable Groups Using Azure Key Vault. However, there is one issue with that approach: whenever a deployment is triggered, it fetches the latest Secret value from the Key Vault. This behaviour might or might not be desirable, depending on the nature of the Secret.


In this post, we will see how we can use an alternative approach using the Azure Key Vault Pipeline task to fetch Secrets and at the same time, allow us to snapshot variable values against a release.

Secrets in Key Vault

Before we go any further, let's look at what Secrets look like in Key Vault. You can create a Secret in Key Vault via various mechanisms, including PowerShell, the CLI, and the portal. From the portal, you can create a new Secret as shown below (from the Secrets section under the Key Vault). A Secret is uniquely identified by the name of the Vault, the Secret name, and the version identifier. Without the version identifier, the latest value is used.

Since we have only one Secret version created, it can be identified using either the full SecretName/Identifier or just the SecretName, as both resolve to the same value.

Depending on how you want your consuming application to get a Secret value, you should choose how to refer to a Secret: using the name only (to always get the latest version) or using SecretName/Identifier (to get that specific version).
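As a rough illustration of the difference from code (a minimal sketch, assuming the Microsoft.Azure.KeyVault and Microsoft.Azure.Services.AppAuthentication NuGet packages, a placeholder vault name myvault, and the example version identifier used later in this post):

var tokenProvider = new AzureServiceTokenProvider();
var keyVaultClient = new KeyVaultClient(
    new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

// SecretName only - always resolves to the latest version
var latest = await keyVaultClient.GetSecretAsync(
    "https://myvault.vault.azure.net", "ConnectionString");

// SecretName plus version identifier - pinned to that specific version
var pinned = await keyVaultClient.GetSecretAsync(
    "https://myvault.vault.azure.net", "ConnectionString",
    "c8b9c1dd4e134118b13568a26f8d9778");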

Azure Key Vault Pipeline Task

The Azure Key Vault task can be used to fetch all or a subset of Secrets from the vault and set them as variables that are available in the subsequent tasks of a pipeline. Using the secretsFilter property on the task, you can either download all the Secrets (at their latest version) or specify a subset of Secret names to download. When specifying the names, you can use either of two formats: SecretName or SecretName/VersionIdentifier.

When using the SecretName format, the Secret is available under the same name as a variable in the subsequent tasks. With the SecretName/Identifier format, the Secret is available as a variable named SecretName/Identifier and also under the name SecretName (with the Identifier part dropped).

E.g. If the secretsFilter property is set to ConnectionString/c8b9c1dd4e134118b13568a26f8d9778, two variables will be created after the Azure Key Vault task step - one with name ConnectionString/c8b9c1dd4e134118b13568a26f8d9778 and another one with name ConnectionString.

The application configuration can use either of the two names, mostly ConnectionString. We do not want the application configuration to be tightly coupled with the Secret Identifier in Key Vault.

Azure DevOps Variable Groups

With the Key Vault integration now moved out of the Variable Group, the group can either be blank or hold the list of Secret names. For the Azure Key Vault task, the variable names can either be passed in as a parameter value defined in the Variable Group (as shown in the image above), or the actual names (along with identifiers) can be specified on the task itself. I prefer to pass them in from the Variable Group, either as a single variable with comma-separated values or as multiple variables that are then listed in the task. To minimize editing the task on every release, I chose a single variable with comma-separated values (as shown below).

With the Azure Key Vault task as the first task in the pipeline, all subsequent tasks have the variables from Key Vault available to use, including the file transform and variable substitution options. Using a specific version of a Secret locks its value down for the release that was created. When the Secret value is updated, the variable needs to be updated with the new version identifier for the pipeline to use the latest version; if not, it will keep using the older version of the Secret value. This behaviour is now exactly like the default variables in a DevOps pipeline, where variable values are snapshotted as part of the release.

The above feature is part of the Azure Key Vault Task as of version 1.0.37.

Using Key Vault via a Variable Group does not snapshot configuration values from the vault. When a deployment is triggered, the latest value from the Key Vault is used.

A couple of weeks back, I accidentally deleted an Azure DevOps Variable Group at work, and it sucked up half of my day. I was lucky that it was the development environment. Since I had access to the environment, I remoted into the corresponding machines (using the Kudu console) and retrieved the values from the appropriate config files. Even with all the access in place, it still took me a couple of hours to fix. This is not a great place to be.

In this post, let us see how we can link secrets from Key Vault to DevOps Variable Groups and how it helps us from accidental deletes and similar mistakes. Check out my video on Getting started with Key Vault and other related articles if you are new to Key Vault.

Azure Key Vault enables safeguarding the cryptographic keys and secrets used by your applications. It increases security and control over keys and passwords.


Azure DevOps supports linking Secrets from an Azure Key Vault to a Variable Group. When creating a new Variable Group, toggle on the 'Link secrets from an Azure Key Vault as variables' option. You can then link the Azure subscription and the associated Key Vault to retrieve the Secrets from. Clicking the Authorize button next to the Key Vault sets the required permissions on the Key Vault for the Azure Service Connection (which is what connects your DevOps account with the Azure subscription).

Azure DevOps Variable Groups and Azure Key Vault

The Add button pops up a dialog as shown below. It allows you to select the Secrets that need to be available as part of the Variable Group.

Azure DevOps Variable Groups link secrets from Vault

Create an Azure Key Vault for each environment to manage the Secrets for that environment. As per the current pricing, creating a Key Vault has no cost associated with it; cost is based on operations against the Key Vault - around USD 0.03 per 10,000 transactions (at the time of writing).

Version History for Variable Changes

Key Vault supports versioning and creates a new version of an object (key/secret/certificate) each time it is updated, which helps keep track of previous values. You can also set expiry/activation dates on a Secret if applicable. Further, by having Expiry Notification for Azure Key Vault set up, you can stay on top of rotating your secrets/certificates on time. The Variable Group refers only to the Secret names in the Key Vault, and the Secret names are the same as those in the application configuration file. Every time a release is deployed, it reads the latest value of each Secret from the associated Key Vault and uses that for the deployment. This is different from defining the variables directly as part of the group, where the variable values are snapshotted at the time of release.

For every deployment, the latest version of the Secret is read from the associated Key Vault.

Make sure this behavior is acceptable with your deployment scenario before moving to use Key Vault via Variable Groups. However, there is a different plugin that you can use to achieve variable snapshotting even with Key Vault, which I will cover in a separate post.

Handling Accidental Deletes

In case anyone accidentally deletes a Variable Group in Azure DevOps, recovering is as simple as cloning the group from one of your other environments and renaming it to be the Dev Variable Group; it is mostly the same set of variables across all environments. The actual secret values are not required anymore, as they are managed in Key Vault.

For argument's sake, what if I accidentally delete the Key Vault itself?

The good news is that Key Vault does have a recovery option. Assuming you create the Key Vault with the recovery options set (which you obviously will now), using the EnableSoftDelete parameter from PowerShell, you can recover from any delete action on the vault/key/secret.

Hope this helps save half a day (or even more) of someone (maybe me again) who accidentally deletes a variable group!

When interacting with third-party services over the network, it is good to have a fault handling and resilience strategy in place. Some libraries have built-in capabilities while for others you might have to roll your own.

Below is a piece of code that I came across at one of my clients. It talks to the Auth0 API and gets all users. However, the Auth0 API has a rate limiting policy. Depending on the API endpoint, the rate limits differ; they also vary based on time and other factors. The HTTP response headers contain information on the status of the rate limits for the endpoint and are dynamic. The code below does define a constant delay between subsequent API calls so as not to exceed the rate limits.

public async Task<User[]> GetAllUsers()
{
    var results = new List<User>();
    IPagedList<User> pagedList = null;

    do
    {
        pagedList = await auth0Client.Users.GetAllAsync(
           connection: connectionString,
           page: pagedList?.Paging.Start / PageSize + 1 ?? 0,
           perPage: PageSize,
           includeTotals: true,
           sort: "email:1");

        results.AddRange(pagedList);

        await Task.Delay(THROTTLE_TIME_IN_MS);
    } while (pagedList.Paging.Start + pagedList.Paging.Length < pagedList.Paging.Total);

    return results.ToArray();
}

The delay seems valid when looked at in isolation, but when different code flows/apps make calls to the Auth0 API at the same time, this is no longer the case. The logs confirmed it: there were many Auth0 errors with a 429 status code, indicating 'Too Many Requests' and 'Global Rate Limit has been reached.'

An obvious fix here might be to re-architect the whole solution to remove this dependency with Auth0 and not make these many API calls in the first place. But let’s accept the solution we have in place and see how we can make it more resilient and fault tolerant. Rate limit Exceptions are an excellent example of transient errors. A transient error is a temporary error that is likely to disappear soon. By definition, it is safe for a client to ignore a transient error and retry the failed operation.

Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner.

With Polly added in as a NuGet package, we can define a policy to retry the API request up to 3 times when the response has a 429 status code. There is also a backoff time added to subsequent requests, based on the attempt count and a hardcoded THROTTLE_TIME.

private Polly.Retry.AsyncRetryPolicy GetAuth0RetryPolicy()
{
    return Policy
        .Handle<ApiException>(a => a.StatusCode == (HttpStatusCode)429)
        .WaitAndRetryAsync(
            3, attempt => TimeSpan.FromMilliseconds(
                THROTTLE_TIME_IN_MS * Math.Pow(2, attempt)));
}

The original call to Auth0, updated to use the policy, is shown below.

...
pagedList = await auth0RetryPolicy.ExecuteAsync(() => auth0Client.Users.GetAllAsync(
    connection: connectionString,
    page: pagedList?.Paging.Start / PageSize + 1 ?? 0,
    perPage: PageSize,
    includeTotals: true,
    sort: "email:1"));

The calls to Auth0 are now more resilient and fault tolerant: the request is automatically retried if the failure reason is 'Too Many Requests' (429). It is an easy win with just a few lines of code. This is just one example of fault handling and retry with the Auth0 API; the same technique can be used with any other service that you depend on. You just need to define your own policy and modify the calls to use it.
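For example, a similar policy for a plain HttpClient call might look like the sketch below. This is only an illustration: weatherClient and the /api/forecast endpoint are made-up placeholders, and you would tune the retry count and backoff to the service you are calling.

private Polly.Retry.AsyncRetryPolicy<HttpResponseMessage> GetHttpRetryPolicy()
{
    return Policy
        .HandleResult<HttpResponseMessage>(r => r.StatusCode == (HttpStatusCode)429)
        .Or<HttpRequestException>()
        .WaitAndRetryAsync(
            3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));
}

...
// weatherClient is an HttpClient configured for the third-party service
var response = await GetHttpRetryPolicy()
    .ExecuteAsync(() => weatherClient.GetAsync("/api/forecast"));

Hope this helps you handle transient errors in your application.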

For one of the web applications I was working on, access was to be restricted based on users belonging to a particular Azure AD group. The application as such did not have any role-based functionality, and it felt like an overhead to set up role-based access when all we wanted was to restrict access to users belonging to a particular group.

In this post, let us look at how we can set up Azure AD authentication such that only users of a particular group can authenticate against it and get a token. I used the ASP.NET Core 2.1 Azure AD authentication sample, but this applies to any application trying to get a token from an Azure AD application.

Setting up Azure AD Application

In the Azure Portal, navigate to the AD application used for authentication, under Enterprise Applications (from Azure Active Directory). Turn on ‘User Assignment Required’ under Properties for the application (as shown below).

With User Assignment Required turned on, users must first be assigned to this application before being able to access it. When this is not turned on, any user in the directory can access the application.

Once this is turned on, if you try to access the application, it will throw an error indicating that the user is not assigned to a role for the application.

Adding User to Role

For users to now be able to access the AD application, they need to be added explicitly to the application. This can be done using the Azure Portal under the ‘Users and groups’ section for the AD application (as shown below).

Users can either be added explicitly to the application or via Groups. Adding a group grants access to all the users within the group. In our scenario, we created an AD group for the application and added users that can access the application to the group. If you have different AD applications per environment (which you should), make sure you do this for all the applications.

Handling Managed Service Identity Service Principal

Even though it is possible to add an MSI service principal to an Azure AD group, it does not work as intended: the request to get a token was failing with the error that the user is not assigned to a role. It looks like this is one of the cases where a full service principal is required.

To get this working for an MSI service principal, I had to create a dummy application role for the AD application and grant the MSI service principal that role for the AD application. Check the Using AD Role section in this article for full details on setting this up. Note that in this case, you need to explicitly add in the application roles and grant access for the service principal for each of them.

Only users who belong to the group or have been assigned directly to the AD application can get a token for it, and hence access the web application. This is particularly useful when the application does not have any role-based functionality and all you want is to restrict access to a certain group of people within your directory/organization.

Azure Functions are getting popular, and I am starting to see them more at clients. One typical scenario I come across is authenticating an Azure Function with an Azure Web API. Every time something like this comes up, it means more Azure AD applications, which in turn means more secrets/certificates that need to be managed. But with the Managed Service Identity (MSI) feature on Azure, a lot of these secrets and authentication bits can be taken off our shoulders and left to the platform to manage for us.


In this post, let us explore how we can successfully authenticate/authorize an Azure Function with a Web API using an AD application and Managed Service Identity, and still not have any secrets/certificates involved in the whole process.

Setting Up the Web API

The Azure-hosted Web API is set up to use Azure AD authentication based on JWT tokens. To enable this, I have the below code in the Startup class, with an AD application created and its ClientId configured as shown below. Any request to the Web API needs a valid token from the Azure AD application in the request header.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(options =>
    {
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();
        options.Filters.Add(new AuthorizeFilter(policy));
    }).SetCompatibilityVersion(CompatibilityVersion.Version_2_2);

    services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options => 
        {
            options.Audience = Configuration["AzureAd:ClientId"];
            options.Authority = 
                $"{Configuration["AzureAd:Instance"]}{Configuration["AzureAd:TenantId"]}";
        });
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...
    app.UseAuthentication();
    app.UseMvc();
}

Enabling MSI on Azure Function

Managed Service Identity (MSI) can be turned on through the Azure Portal. Under 'Platform features' for the Azure Function, select 'Identity' as shown below and turn it on for System Assigned.

A system-assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that’s trusted by the subscription of the instance. After the identity is created, the credentials are provisioned onto the instance. The lifecycle of a system-assigned identity is directly tied to the Azure service instance that it’s enabled on. If the instance is deleted, Azure automatically cleans up the credentials and the identity in Azure AD.

Once enabled, you can find the added identity for the Azure function under Enterprise Applications list in the AD directory. Azure internally manages this identity.

Authenticating Function with API

To authenticate with the Web API, we need to present a token from the AD application. Any service principal on the AD can authenticate and retrieve such a token, and so can our Azure Function with the identity turned on. Usually, authenticating with Azure AD requires a Client ID/Secret or Client ID/Certificate combination. However, with MSI turned on, Azure manages these credentials for us in the background, and we don't have to manage them ourselves. The AzureServiceTokenProvider class from the Microsoft.Azure.Services.AppAuthentication NuGet package helps authenticate an MSI-enabled resource with AD.

With the AzureServiceTokenProvider class, if no connection string is specified, Managed Service Identity, Visual Studio, Azure CLI, and Integrated Windows Authentication are tried in order to get a token. Even if no connection string is specified in code, one can be specified in the AzureServicesAuthConnectionString environment variable.

To access the API, we need to pass the token from AD application as a Bearer token, as shown below.

var target = "<AD App Id of Web API>";
var azureServiceTokenProvider = new AzureServiceTokenProvider();
string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync(target);

var wc = new System.Net.Http.HttpClient();
wc.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
var result = await wc.GetAsync("<Secured API URL>");

Role-Based Authorization For Azure Function MSI

Now that we have the authentication set up between the Azure Function and Web API, we might want to restrict the endpoints on the API the function can call. It is the typical User Authorization scenario, and we can use similar approaches that apply.

Using AD Role

To add an App Role for the MSI function, we first need to add an 'Application' role to the AD application (the one that the Web API uses to authenticate against). The allowedMemberTypes array can include both User and Application if you want the same role available to users and applications.

"appRoles": [
    {
        "allowedMemberTypes": [
            "Application"
        ],
        "description": "All",
        "displayName": "All",
        "id": "d1c2ade8-98f8-45fd-aa4a-6d06b947c66f",
        "isEnabled": true,
        "lang": null,
        "origin": "Application",
        "value": "All"
    }
]

With the role defined, we can add the MSI Service Principal to the application role using New-AzureADServiceAppRoleAssignment cmdlet.

# TenantId required only if multiple tenant exists for login
Connect-AzureAd -TenantId 'TENANT ID' 
# Azure Function Name (Service Principal created will have same name)
# Check under Enterprise Applications
$msiServicePrincipal = Get-AzureADServicePrincipal -SearchString "<Azure Function Name>" 
# AD App Name 
$adApp = Get-AzureADServicePrincipal -SearchString "<AD App Web API>"

New-AzureADServiceAppRoleAssignment -Id $adApp.AppRoles[0].Id `
     -PrincipalId $msiServicePrincipal.ObjectId `
     -ObjectId $msiServicePrincipal.ObjectId `
     -ResourceId $adApp.ObjectId

Using AD Group

In a previous post, we saw how to use Azure AD Groups to provide role-based access. You can add a Service Principal to the AD group either through the portal or code.

# -g          : AD Group Id
# --member-id : Service Principal Object Id
az ad group member add \
    --subscription b3c70d42-a0b9-4730-84a4-b0004a31f7b4 \
    -g aa762499-6287-4e28-8753-27e90cfd2738 \
    --member-id bb8920f3-7a76-4d92-9fff-fc10afa7887a

To verify that the token retrieved using the AzureServiceTokenProvider has the associated claims, decode the token using jwt.io. In this case, I have added both roles and groups for the MSI service principal, and you can see that below (highlighted).

The Web API can now use these claims from the token to determine what functionality needs to be available for the associated roles. Here is a detailed post on how to do that using claims based on Groups.
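As a rough illustration (not from the original setup), assuming the default role claim mapping performed by the JWT bearer middleware and a made-up controller action, the Web API could gate an endpoint on the application role defined earlier:

// Only callers whose token carries the 'All' role (assigned directly or via an
// AD group) reach this action; everyone else gets a 403 Forbidden response.
[Authorize(Roles = "All")]
[HttpGet]
public IActionResult GetUsers()
{
    return Ok();
}

A similar check can be done against the 'groups' claim in the token for group-based access.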

By enabling MSI, we need one less set of authentication keys shipped as part of our application. The infrastructure layer, Azure, handles this for us, which makes building applications a lot easier. Azure supports MSI for many more resources, where similar techniques can be applied. Hope this helps you authenticate and authorize the Azure Functions accessing your Web API and also helps you discover more use cases for Managed Service Identity (MSI).

References:

Using Azure Managed Service Identities with your apps

How To Take iOS App Store Screenshots Using Google Chrome For Cordova Applications

Using Chrome Browser to take screenshots in all resolutions as required by the App Stores.

When submitting an app to the iOS App Store, it is now mandatory to upload screenshots for the iPhone XS Max or the 12.9-inch iPad Pro. I had just moved to a new client and had to push an update for their existing mobile application. I had none of the iOS app development applications set up and had to get this done as soon as possible. The mobile app is the website packaged as a Cordova application.

In this post, we will see how we can use the Google Chrome browser to take screenshots in the different resolutions that the App Store requires for app submission.

Simulate Mobile Devices in Chrome

Use the Chrome DevTools to simulate mobile devices and load the website in mobile view. In this mode, the device drop-down shows a predefined list of mobile devices. You can switch between them to see how the site renders on various devices.

Only a subset of the available devices is shown in the drop-down. Selecting the Edit button lists all available devices; only the devices that are ticked in that list are visible in the drop-down. You can edit this to add/remove devices from the drop-down list.

Add Custom Device

The iPhone XS Max is a relatively new device, and its settings are not yet available in the predefined list of devices. This could very well be available at the time of reading; however, there can be another device that you are looking for that is not in the list. In that case, you can add the device using the 'Add custom device' button in the Edit screen that lists all the devices (shown above).

The iPhone XS Max has a screen size of 1242px x 2688px. Using the actual pixels might render the page too large for your laptop/monitor. You can reduce the size by a factor, the Device Pixel Ratio (DPR), and enter that along with the device details. In the example below, I have used a DPR of 3, which makes the width and the height smaller - 414px x 896px (1242 / 3 = 414px; 2688 / 3 = 896px).

Capture Screenshot in Native Resolution

To upload screenshots to the App Store, you need them to be in the native resolution, as if you had taken the screenshots on the actual devices. Since the page rendered in the mobile device layout is of a different size, you cannot simply take a screen capture of the rendered page, since that will be in a different resolution. To capture a screenshot in the native resolution, there are two options.

From the options drop-down menu (accessed from the three vertical dots button), shown below, you can use the 'Capture Screenshot' menu item.

The screenshot option is also available in the Command Menu, which is accessible via the menu option in DevTools or the Control+Shift+P keyboard shortcut. Filter the list of available commands and choose the 'Capture Screenshot' command to take a screenshot in native resolution.

The screenshots generated using either of the above methods will be in the actual device resolution - in this case, 1242px x 2688px. They can be uploaded as is to the App Store and submitted for review.

Hope this helps you to generate screenshots for your mobile applications built using Cordova.

Code Signing MSI Installer and DLLs in Azure DevOps

Code signing using Microsoft SignTool in an Azure DevOps build pipeline.

Code Signing is the process of digitally signing executables and scripts to confirm the software author and guarantee that the code has not been altered or corrupted since it was signed.

Code Signing is something that you need to consider if you are distributing installable software packages that are consumed by your consumers. Some of the most obvious examples would be any of the standalone applications that you install on your machine. Code signing provides authenticity of the software package distributed and ensures that it is not tampered with between when it was created and delivered to you. Code Signing usually involves using a Code Signing Certificate which follows the public/private key pair to sign and verify. You can purchase this from all certificate issuing authorities; google should help you choose one.

At one of my recent projects, I had to set up code signing for the MSI installer of a Windows service. The following artifacts were to be signed: the Windows service executable, the dependent DLLs, and the MSI installer (which packages the above). We were using Azure DevOps for our build process and wanted to automate the code signing. We are using WiX to generate the MSI installer; check out my previous post on Building Windows Service Installer on Azure Devops if you want more details on setting up a WiX installer in DevOps.

Since we need to sign the executable, the DLLs, and also the installer that packages them, I do this as a two-step process in the build pipeline: first sign the DLLs and the executable, after which the installer project is built and signed. It ensures the artifacts included in the installer are also signed. We self-host our build agent, so I installed our code signing certificate on the machine manually and added the certificate thumbprint as a build variable in DevOps. For a hosted agent, you can upload the certificate as a Secure File and use it from there.

Sign DLLs

The pipeline first builds the whole project to generate all the DLLs and the service executable. Microsoft's SignTool is used to sign the DLLs and the executable. The tool takes the certificate's thumbprint as a parameter, along with a few other parameters; check the documentation to see what each parameter does. It accepts wildcards for the files to be signed, so if you follow a convention for project/DLL names (which you should), signing them all can be done in one command.

c:\cert\signtool.exe sign /tr http://timestamp.digicert.com 
    /fd sha256 /td sha256 /sm /sha1 "$(CodeSignCertificateThumbprint)" 
    /d "My Project description"  MyProject.*.dll

Code Signing Azure DevOps

Sign Installer

Now that we have the DLLs and the executable signed, we need to package them using the WiX project. By default, building a WiX project rebuilds all the dependent assemblies, which would overwrite the above-signed DLLs. To avoid this, make sure to pass a parameter to the build command so that project references are not rebuilt (/p:BuildProjectReferences=false) and are only packaged. The MSI installer produced by the build can then be signed using the same tool.

Code Signing Azure DevOps

Sign Powershell Scripts

We also had a few PowerShell scripts that were packaged along with a separate application. To sign them, you can use the Set-AuthenticodeSignature cmdlet. All you need is to get the certificate from the appropriate store and pass it to the cmdlet along with the files that need to be signed.

$cert = (Get-ChildItem cert:\LocalMachine\My |
    Where-Object { $_.Thumbprint -eq "$(CodeSignCertificateThumbprint)" })[0]
Set-AuthenticodeSignature -TimestampServer "http://timestamp.digicert.com" `
    -FilePath .\Deploy\*.ps1 -Certificate $cert

If you are distributing packaged software for your end users to install, it is generally a good idea to code sign your artifacts and also publish a verifiable hash on your website along with the downloadables.

I hope this helps!
