I was recently playing around with the MessageMedia API, trying to send an SMS and get the status of the sent message. Sending the SMS and getting the status of the last sent SMS always happened in succession when testing manually. Once I sent the message, I waited for the API response, grabbed the message id from the response and used that to form the get status request.

Postman is a useful tool if you are building or testing APIs. It allows you to create, send, and manage API requests.

Postman Chaining Requests

I added two requests and saved them to a collection in Postman - one to send a message and the other to get the message status. I created an environment variable to hold the message id. For the request that sends a message, I added the Tests snippet below. It parses the response body of the request, extracts the message id of the last sent message, and saves it to the environment variable. The Tests snippet always runs after the request completes.

// Parse the Send Messages response body
var jsonData = JSON.parse(responseBody);
// Save the id of the message we just sent for the next request
postman.setEnvironmentVariable("messageId", jsonData.messages[0].message_id);
tests["Success"] = true;

The Get message status request uses the messageId from the environment variables to construct its URL, which looks like this:

https://api.messagemedia.com/v1/messages/{{messageId}}

When this request executes, Postman substitutes the messageId from the environment variable, which was set by the previous request. You no longer have to copy the message id manually and paste it into the URL. This is how we chain data from one request to another. Chaining requests is also useful in automated testing with Postman. Hope this helps!

At times you might need to extract data from a large text. Let’s say you have a JSON response, and you want to extract all the id fields in the response and combine them as comma separated. Here’s how you can easily extract data from large text using Sublime (or any other text editor that supports simultaneous editing).

For example, take this response from https://jsonplaceholder.typicode.com/posts:
[
  {
    "userId": 1,
    "id": 1,
    "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
    "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto"
  },
  {
    "userId": 1,
    "id": 2,
    "title": "qui est esse",
    "body": "est rerum tempore vitae\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\nqui aperiam non debitis possimus qui neque nisi nulla"
  },
  ...
]

Again, the key here is to select the recurring pattern first. In this case it is “id”:, then select all occurrences of it. Once all occurrences are selected, extend the selection to the whole line and cut those lines out. Repeat the same steps to remove the id text. Then follow the same steps we used to combine text.
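With the sample response above, the end result would be the id values combined as a comma-separated list:

1, 2, ...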

Hope this helps you to extract data from large text files.

How do you secure the access keys to the Key Vault itself?

If you use a ClientId/Secret to authenticate with the Key Vault, you are likely to end up having these in the web.config file (there are ways around it), which is what we set out to avoid by using Azure Key Vault in the first place. The recommended approach until now was certificate-based authentication, where you only need the thumbprint of the certificate in the web.config and can deploy the certificate along with the application. If you are not familiar with either way of authenticating with Key Vault, check out this article. With both secret and certificate-based authentication, we also run into the problem of credentials expiring, which in turn can lead to application downtime.

Managed Service Identity (MSI) solves this problem by giving Azure services (such as Azure App Service, Azure Virtual Machines and Azure Functions) an automatically managed identity in Azure Active Directory (Azure AD). You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without any explicit credentials in your code.

MSI can be enabled through the Azure Portal. For example, to enable MSI for an App Service, the portal has an option as shown below.

Enable Managed Service Identity for Azure App Service
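If you prefer scripting it, the Azure CLI can enable the identity too; a sketch with placeholder names:

az webapp identity assign --name MyAppService --resource-group MyResourceGroup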

Once enabled, we can add an access policy in the Key Vault to give permissions to the Azure App Service. Search by the app service name and assign the required access policies.
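This step can also be scripted. The below grants the app's identity read access to secrets; the object id is the principal id of the identity created above, and the names are placeholders:

az keyvault set-policy --name MyKeyVault --object-id <principalId> --secret-permissions get list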

For an application to access the Key Vault, we use AzureServiceTokenProvider from the Microsoft.Azure.Services.AppAuthentication NuGet package. Instead of using ClientCredential or ClientAssertionCertificate to acquire the token, we let AzureServiceTokenProvider acquire it for us.

var azureServiceTokenProvider = new AzureServiceTokenProvider();
var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
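With the client in place, reading a secret needs only its identifier. A minimal sketch, with placeholder vault and secret names:

// MSI (or one of the local development fallbacks below) supplies the access token.
var secret = await keyVaultClient.GetSecretAsync("https://myvault.vault.azure.net/secrets/MySecret");
Console.WriteLine(secret.Value);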

The AzureServiceTokenProvider class tries the following methods to get an access token:

  1. Managed Service Identity (MSI) - for scenarios where the code is deployed to Azure, and the Azure resource supports MSI.
  2. Azure CLI (for local development) - Azure CLI version 2.0.12 and above supports the get-access-token option. AzureServiceTokenProvider uses this option to get an access token for local development.
  3. Active Directory Integrated Authentication (for local development). To use integrated Windows authentication, your domain’s Active Directory must be federated with Azure Active Directory. Your application must be running on a domain-joined machine under a user’s domain credentials.

Local Development

For the AzureServiceTokenProvider to work locally, we need to install the Azure CLI and also set up an environment variable - AzureServicesAuthConnectionString. Depending on whether you want to use ClientId/Secret or ClientId/Certificate-based authentication, the value of the environment variable changes.

Set AzureServicesAuthConnectionString to
RunAs=App;AppId=AppId;TenantId=TenantId;AppKey=Secret
or
RunAs=App;AppId=AppId;TenantId=TenantId;CertificateThumbprint=Thumbprint;CertificateStoreLocation=CurrentUser
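On Windows, one way to persist the variable is setx; shown here for the secret variant, with placeholder values:

setx AzureServicesAuthConnectionString "RunAs=App;AppId=AppId;TenantId=TenantId;AppKey=Secret"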

Get Tenant Id and AppId

As shown above, you can get the TenantId and AppId from the App Registrations page in the Azure portal. Clicking the Endpoints button reveals a list of URLs containing the TenantId GUID. The AppId is displayed against each AD application. Once you set the environment variable, the application can connect to Key Vault without any additional configuration entries in the web/app config.

Azure Managed Service Identity makes it easier to connect to Key Vault and removes the need to have any sensitive information in the application configuration file. It also removes the overhead of renewing the certificates/secrets used to connect to the vault. One less thing to worry about in your application!

As a developer, I often end up needing to manipulate text. Sometimes this text can get quite large, and manipulating it manually takes a while. Having a capable text editor in your tool belt helps in situations like this. Let's look at one of the common scenarios I come across and how to solve it using a text editor. I use Sublime Text as my go-to editor for such text editing hacks, but you can do this in any text editor that supports simultaneous editing.

Let's say I get a list of comma-separated values and need to insert double (or single) quotes around each value to use in a SQL query. To demonstrate this, I went to random.org to generate a list of random values, and ended up having to use the very technique I was about to demonstrate on its output. I generated 12 random numbers, and the site gave me a tab-separated list of values, as shown below.

91    66    31    11    90
80    1    24    48    61
61    66

I now need to convert this into a comma-separated list. Let’s see how we can go about doing this.

  1. Select the recurring character pattern. In this case, it is the tab character.
  2. Select all occurrences of the pattern (Alt + F3 - Find All in Sublime).
  3. Act on all the occurrences. In this case, I want to remove them, so I press Del.
  4. Since I want a comma between each of the numbers, I first split them onto multiple lines by pressing Enter. Now every number is on a separate line.
  5. Select all the numbers and insert a cursor at the end of each line (Ctrl + Shift + L).
  6. Insert a comma. With the cursor still at the end of every line, pressing Delete joins all the lines back into one. Remove the trailing comma.
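Applied to the numbers above, the end result is a single comma-separated line:

91,66,31,11,90,80,1,24,48,61,61,66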

Though this is a specific example, I hope you get the general idea of how to manipulate text, splitting and combining as required. As an exercise, try inserting double (or single) quotes around each value in the comma-separated list we now have, to use it in a SQL query!

Electron is a great way to build cross-platform desktop applications using HTML, CSS and JavaScript. When I first came across Electron, I was surprised to see that many of the applications I use daily were built on it without my ever knowing. Since then I have been interested in learning more about developing applications using Electron. Recently I was playing around with an idea for a side project and decided to use Electron, as I wanted a desktop application.

Setting up the React Application

With the create-react-app template generator, it is easy to set up a React application and get it up and running. All you need are the commands below, and you have everything set up.

npm install -g create-react-app

create-react-app electron-react
cd electron-react
npm start

The above commands create an 'electron-react' folder with all the code and serve the app at http://localhost:3000 in development mode.

Setting up Electron

Now that we have a React application set up, let us integrate Electron with it. The below command installs the electron package.

npm install electron

A basic Electron application needs just these files:

  • package.json - Points to the app’s main file and lists its details and dependencies. We already have this as part of the react application.
  • main.js - Starts the app and creates a browser window to render HTML. This is the app’s main process. We will add this file.
  • index.html - A web page to render. This is the app’s renderer process. We already have this as part of the react application.

Let's start by adding a main.js file. We will keep the code to the bare minimum. All we do here is add a createWindow function which uses BrowserWindow from the electron package to create a new window instance. The window loads the development server URL; we will modify this URL later so the app can run independently of a hosted server and be packaged and deployed easily. The app's ready event is wired to create the new window.

main.js
const {app, BrowserWindow} = require('electron');

let mainWindow;

function createWindow() {
    mainWindow = new BrowserWindow({ width: 800, height: 600});
    const startUrl = process.env.DEV_URL;

    mainWindow.loadURL(startUrl);

    mainWindow.on('closed', () => mainWindow = null);
}

app.on('ready', createWindow);

After updating the package.json with the electron application main entry point, we are all set to run the application.

package.json
"main": "src/main.js",

Fire up two consoles: launch the React application in one using npm start, and the Electron application in the other using 'set DEV_URL=http://localhost:3000 && electron .' (the set syntax is for Windows).

Electron React

Setting up for Deployment

Opening two consoles and remembering to start the React server first soon becomes a pain. To avoid this, we can use two npm packages to run both tasks one after the other.

  • concurrently: Run multiple commands concurrently.
  • wait-on: Wait for files, ports, sockets, http(s) resources to become available

Install both the packages and modify the package.json as shown below.

package.json
"react-start": "react-scripts start",
"electron-dev": "set DEV_URL=http://localhost:3000 && electron .",
"start": "concurrently \"npm run react-start\" \"wait-on http://localhost:3000/ && npm run electron-dev\""

Running npm start now launches the React application, waits for the server to be up and running, and then launches the Electron application.

The Electron app still depends on the React application being hosted locally. Let's update main.js so that it can also run from the generated output of the React application. Running npm run build generates the website contents into the build folder.

main.js
...
const path = require('path');
const url = require('url');
...
 const startUrl = process.env.DEV_URL ||
    url.format({
      pathname: path.join(__dirname, '/../build/index.html'),
      protocol: 'file:',
      slashes: true
    });
mainWindow.loadURL(startUrl);
...

Set the homepage property in package.json ("homepage": "./") to enable relative paths in the generated index.html file. Once this is done, we can generate the site using npm run build and run the Electron application using 'electron .'. This launches the application from the build folder.
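For reference, the relevant package.json entries end up looking something like the below; the electron-prod script is an optional convenience I have added, not part of the original setup.

package.json
{
  "main": "src/main.js",
  "homepage": "./",
  "scripts": {
    "react-start": "react-scripts start",
    "electron-dev": "set DEV_URL=http://localhost:3000 && electron .",
    "electron-prod": "npm run build && electron .",
    "start": "concurrently \"npm run react-start\" \"wait-on http://localhost:3000/ && npm run electron-dev\"",
    "build": "react-scripts build"
  }
}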

Hope this helps you to jump start with your Electron app development using React.

In the previous post, Generating a Large PDF from Website Contents - HTML to PDF, Bookmarks and Handling Empty Pages, we saw how to generate PDFs from HTML and add bookmarks to them. Each generated PDF covers an individual section, with the relevant content and bookmarks, and these now need to be merged into a single PDF file.

One of the important things to keep intact when merging is the document hierarchy. Sections, sub-categories, and categories should align correctly so that the final bookmark tree and the Table of Contents come out right. It is best to maintain the list of individual PDF document streams in that same hierarchy. Since we know the required structure right from the UI, this is easily achieved with a data structure similar to the one shown below.

public class DocumentSection
{
    public MemoryStream PDFDocument {get; set;}

    public List<DocumentSection> ChildSections {get; set;}

    ... // Any additional details that you need
}

The above structure lets us maintain a tree of the document, mirroring the structure presented to the user when selecting the PDF options. I used the iTextSharp library to merge the PDF documents. To interact with a PDF, we first create a PdfReader object from the stream. Using the SimpleBookmark class, we can get the existing bookmarks of the PDF.

var pdfReader = new PdfReader(stream);
ArrayList bookmarks = SimpleBookmark.GetBookmark(pdfReader);

iText's representation of bookmarks is a bit complex: an ArrayList of Hashtables, where each Hashtable has keys like Action, Title, Page and Kids. The Kids entry represents child bookmarks and is again the same ArrayList type. Since this structure is hard to work with directly, I created a wrapper class to interact with bookmarks more easily.

public class Bookmark
{
    public Bookmark(
        string title, string destinationType, int pageNumber,
        float xLeft, float yTop, float zZoom)
    {
        Children = new List<Bookmark>();
        Title = title;
        PageNumber = pageNumber;
        DestinationType = destinationType ?? "XYZ";
        XLeft = xLeft;
        YTop = yTop;
        ZZoom = zZoom;
        PageBreak = false;
    }

    ... // Class properties for the constructor parameters

    public ArrayList ToiTextBookmark()
    {
        ArrayList arrayList = new ArrayList
        {
            ToiTextBookmark(this),
        };
        return arrayList;
    }

    private Hashtable ToiTextBookmark(Bookmark bookmark)
    {
        var kids = new ArrayList();
        var hashTable = new Hashtable
        {
            ["Action"] = "GoTo",
            ["Title"] = bookmark.Title,
            ["Page"] = $@"{bookmark.PageNumber} {bookmark.DestinationType} 
                         {bookmark.XLeft} {bookmark.YTop} {bookmark.ZZoom}",
            ["Kids"] = kids,
        };

        foreach (var childBookmark in bookmark.Children)
        {
            kids.Add(ToiTextBookmark(childBookmark));
        }

        return hashTable;
    }
}

Recursively iterating through the list of DocumentSections, I add all the bookmarks to a root Bookmark instance, which represents the full bookmark tree of the merged PDF. The PageNumber is offset using a counter variable, which is incremented by the number of pages in each PDF section (pdfReader.NumberOfPages) as the section is merged in. This ensures that each bookmark points to the correct page in the combined PDF file.
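A sketch of the offsetting step; this helper is my own illustration, but the names match the wrapper class above:

// Shift a section's bookmarks so they point into the merged document.
private void OffsetBookmarks(Bookmark bookmark, int pageOffset)
{
    bookmark.PageNumber += pageOffset;
    foreach (var childBookmark in bookmark.Children)
    {
        OffsetBookmarks(childBookmark, pageOffset);
    }
}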

The individual documents are then merged by iterating through all the generated document sections. Once done, we get the final PDF as a byte array, which is returned to the user.

public byte[] MergeSections(List<DocumentSection> documentSections, Bookmark bookmarkRoot)
{
    int pageNumber = 0;
    using (var stream = new MemoryStream())
    {
        var document = new Document();
        var pdfWriter = PdfWriter.GetInstance(document, stream);
        document.Open();
        var pdfContent = pdfWriter.DirectContent;
        MergeSectionIntoDocument(documentSections, document, pdfContent, pdfWriter, ref pageNumber);
        pdfWriter.Outlines = bookmarkRoot.ToiTextBookmark();
        document.Close();
        stream.Flush();
        return stream.ToArray();
    }
}

private void MergeSectionIntoDocument(
    List<DocumentSection> documentSections,
    Document document,
    PdfContentByte pdfContent,
    PdfWriter pdfWriter,
    ref int pageNumber) // by ref, so pages added in child sections continue the numbering
{
    foreach (var documentSection in documentSections)
    {
        var stream = documentSection.PDFDocument;
        stream.Position = 0;
        var pdfReader = new PdfReader(stream);

        for (var i = 1; i <= pdfReader.NumberOfPages; i++)
        {
            var page = pdfWriter.GetImportedPage(pdfReader, i);
            document.SetPageSize(new iTextSharp.text.Rectangle(0.0F, 0.0F, page.Width, page.Height));
            document.NewPage();
            pageNumber++;
            pdfContent.AddTemplate(page, 0, 0);
            this.AddPageNumber(pdfContent, document, pageNumber); // helper that stamps the page number (not shown)
        }

        if (documentSection.ChildSections.Any())
            MergeSectionIntoDocument(documentSection.ChildSections, document, pdfContent, pdfWriter, ref pageNumber);
    }
}

To generate a Table of Contents (ToC), we can use the root bookmark information. We need to create a PDF page manually, read the bookmark text and add links to the page with the required font and styling. iText provides APIs to create custom PDF pages.
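As an illustration, below is a minimal sketch of such a ToC page using iTextSharp; the traversal and styling are my assumptions, with PdfAction.GotoLocalPage providing the link to a page of the document being written. Note that this writes the ToC at the current position, so placing it at the front needs a page reordering pass or a second pass over the document.

private void AddTableOfContents(Document document, PdfWriter pdfWriter, Bookmark bookmarkRoot)
{
    document.NewPage();
    document.Add(new Paragraph("Table of Contents"));

    foreach (var bookmark in bookmarkRoot.Children)
    {
        // Each entry links to the page its bookmark points at.
        var chunk = new Chunk($"{bookmark.Title} .... {bookmark.PageNumber}")
            .SetAction(PdfAction.GotoLocalPage(
                bookmark.PageNumber, new PdfDestination(PdfDestination.FIT), pdfWriter));
        document.Add(new Paragraph(chunk));
    }
}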

We are now able to generate a single PDF based on the website contents.

Very often I need to sign forms, receipts, and invoices in PDF format and send them to someone else. Printing the PDF, signing it physically and scanning it back (using Office Lens, of course) is how I used to do this until a while back. Since I don't have a printer at home, I always had to wait until I reached the office. I also did not like wasting paper and ink just to put a signature.

Adobe Reader allows us to 'Fill and Sign' documents, so we can add signatures without needing to print them. Follow the steps below to set up Adobe Reader to sign any document.

1. Sign on a piece of white paper and take a picture. Transfer the picture to your computer and crop the image using your favorite image editor. You should end up with something like the below.

Your Signature

2. Open the PDF file that you need to sign with Adobe Reader.

3. Open the 'Fill and Sign' option. You can do this either from the Tools pane (Shift + F4 on Windows) or from the menu 'View -> Tools -> Fill and Sign'.

4. Under the Sign option, choose a signature image. Pick the image you created earlier and save.

Add your Signature

You are all set to sign documents now. Anytime you want to sign a document, choose ‘Fill and Sign’ and you will see your signature under the Sign button. Click the signature and place it anywhere on the document that you want to sign. No more printing and scanning them back again.

Scanning physical documents with a scanner can be cumbersome, especially if you do not have easy access to one. Taking pictures with the phone's default camera application might not give the results you expect, and you will mostly end up having to trim unwanted elements from such photos.

Microsoft Office Lens is the perfect application for scanning documents and whiteboards. Office Lens detects the document in the camera frame, lets you capture just what is required, and enhances the captured sections. Below is an example of the highlighting and the captured document.

Office Lens Capture

Features

  • Capture and crop a picture of a whiteboard or blackboard, and share your meeting notes with colleagues.
  • Make digital copies of your printed documents, business cards or posters, and trim them precisely.
  • Printed text is automatically recognized (using OCR) when converting to Word and PDF, so you can search for words in images and copy and edit them.

Office Lens is available on all platforms. Download the Android, iOS or Windows version from the stores. Next time you want to scan something, just Office Lens it!

The Headphones Rule

‘I need some undistracted time.’

This was one of the things that came up in my team’s retrospective yesterday. Having some undistracted time is necessary for getting things done. It’s a good practice to have a consensus among the team members on how to manage disruptions and indicate whether you are open for a chat.

The Headphone Rule is an interesting way to indicate whether a person is open to interactions or not.

No headphones: you can talk to me.

One headphone: you can talk to me about work.

Two headphones: do not talk to me.

For people who do not use headphones, some other technique is needed (sticky notes, colored lights, etc.). Luckily, everyone in my team uses headphones, so this was an acceptable solution. Irrespective of the approach you choose, it is important to have some agreed way to indicate whether you are interruptible or not. It helps you and the team get some undistracted time.

If you are a .NET developer looking for some awesome free stuff, check out Visual Studio Dev Essentials. You get loads of free stuff:

Free tools, cloud services, and training

Get everything you need to build and deploy your app on any platform. With state-of-the-art tools, the power of the cloud, training, and support, it’s our most comprehensive free developer program ever.


Some of the key attractions of the program are:

  • $300 Azure Credit for a year
  • Access to Xamarin University Training
  • Pluralsight access for three months
  • WintellectNOW access for three months

All you need to sign up is a Windows Live ID. Get it if you have not already!