At times you might need to extract data from a large piece of text. Let’s say you have a JSON response, and you want to extract all the id fields in the response and combine them into a comma-separated list. Here’s how you can easily do this using Sublime (or any other text editor that supports simultaneous editing).
Again, the key here is to select the recurring pattern first. In this case, it is “id”:, and then we select all occurrences of that. Once all occurrences are selected, we can extend the selection to the whole line and extract those lines out. Repeat the same steps to remove the id text, then follow the same steps we used earlier to combine the values.
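The same extraction can also be scripted. Below is a sketch in Python; the payload here is a hypothetical response with the shape described above, not one from the original post:

```python
import json

def extract_ids(payload: str) -> str:
    """Parse a JSON array and join the 'id' field of each item with commas."""
    items = json.loads(payload)
    return ",".join(str(item["id"]) for item in items)

# Hypothetical response similar to the one described above.
payload = '[{"id": 101, "name": "a"}, {"id": 102, "name": "b"}, {"id": 103, "name": "c"}]'
print(extract_ids(payload))  # 101,102,103
```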
Hope this helps you to extract data from large text files.
How do you secure access to the Key Vault itself?
If you use ClientId/Secret to authenticate with a key vault, then you are likely to end up having these in the web.config file (there are ways around this), which is what we initially set out to avoid by using Azure Key Vault. The recommended approach until now was to use certificate-based authentication, so that you only need the thumbprint of the certificate in the web.config and can deploy the certificate along with the application. If you are not familiar with either way of authenticating with Key Vault, then check out this article. With both Secret and certificate-based authentication, we also run into the problem of credentials expiring, which in turn can lead to application downtime.
Managed Service Identity (MSI) solves this problem by allowing an Azure App Service, Azure Virtual Machines or Azure Functions to connect to Key Vault (and a few other services) without any explicit credentials in the code.
Managed Service Identity (MSI) makes solving this problem simpler by giving Azure services an automatically managed identity in Azure Active Directory (Azure AD). You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having any credentials in your code.
MSI can be enabled through the Azure Portal. E.g., for App Service, the portal has a Managed service identity option under the app’s settings that can be switched on.
Once enabled, we can add an access policy in the Key Vault to give permissions to the Azure App Service. Search by the app service name and assign the required access policies.
For an application to access the key vault, we need to use AzureServiceTokenProvider from Microsoft.Azure.Services.AppAuthentication NuGet package. Instead of using the ClientCredential or ClientAssertionCertificate to acquire the token, we will use AzureServiceTokenProvider to acquire the token for us.
The AzureServiceTokenProvider class tries the following methods to get an access token:
- Managed Service Identity (MSI) - for scenarios where the code is deployed to Azure, and the Azure resource supports MSI.
- Azure CLI (for local development) - Azure CLI version 2.0.12 and above supports the get-access-token option. AzureServiceTokenProvider uses this option to get an access token for local development.
- Active Directory Integrated Authentication (for local development). To use integrated Windows authentication, your domain’s Active Directory must be federated with Azure Active Directory. Your application must be running on a domain-joined machine under a user’s domain credentials.
For the AzureServiceTokenProvider to work locally, we need to install the Azure CLI and also set up an environment variable, AzureServicesAuthConnectionString. Depending on whether you want to use ClientId/Secret or ClientId/Certificate-based authentication, the value of the environment variable changes.
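The documented connection string formats for the two cases are roughly as follows; the angle-bracket values are placeholders for your own IDs, not values from the original post:

```text
RunAs=App;AppId=<AppId>;TenantId=<TenantId>;AppKey=<ClientSecret>
RunAs=App;AppId=<AppId>;TenantId=<TenantId>;CertificateThumbprint=<Thumbprint>;CertificateStoreLocation=CurrentUser
```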
As shown above, you can get the TenantId and AppId from the App Registrations page in the Azure portal. Clicking the Endpoints button reveals a list of URLs that contain the TenantId GUID. The AppId is displayed against each of the AD applications. Once you set the environment variable, the application will be able to connect to Key Vault without any additional configuration entries in the web/app config.
Azure Managed Service Identity makes it easier to connect to Key Vault and removes the need to have any sensitive information in the application configuration file. It also removes the overhead of renewing the certificates/secrets used to connect to the Vault. One less thing to worry about in the application!
As a developer, I often end up needing to manipulate text. Sometimes this text can get quite large, and it might take a while to do it manually. Having a text editor under your tool belt often helps in situations like that. Let’s look at one of the common scenarios that I come across and how we can solve it using a text editor. I use Sublime Text as my go-to editor for such text editing hacks, but you can do this in any text editor that supports simultaneous editing.
Let’s say I just got a list of comma-separated values and need to insert double (or single) quotes around each value to use in a SQL query. To demonstrate this, I went to random.org to generate a list of random values, which meant I got to use the very technique I set out to demonstrate. I generated 12 random numbers, and the site gave me a tab-separated list of values.
I now need to convert this into a comma-separated list. Let’s see how we can go about doing this.
- Select the recurring character pattern. In this case, it is the tab space.
- Select all occurrences of the pattern. (Alt + F3 - Find All in Sublime)
- Act on all the occurrences. In this case, I want to remove them, so I use Del
- Since I want to introduce a comma between each of the numbers, I first split them into multiple lines using Enter. Now I have all the numbers on a separate line.
- Select all the numbers and insert a cursor at the end of each. ( Ctrl + Shift + L)
- Insert a comma at the end of each line. Since the cursor is still at the end of every line, pressing Delete joins all the lines back into one. Finally, remove the trailing comma.
Though this is a specific example, I hope it gives you the general idea of how to split and combine text as required. You should now be able to insert double (or single) quotes around each value in the comma-separated list we have, to use it in a SQL query!
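The steps above can also be expressed as a small script. Here is a sketch in Python; the sample input is hypothetical, standing in for the tab-separated numbers from random.org:

```python
def quote_csv(tab_separated: str, quote: str = "'") -> str:
    """Split on tabs, wrap each value in quotes, and join with commas."""
    values = tab_separated.split("\t")
    return ",".join(f"{quote}{v}{quote}" for v in values)

sample = "42\t7\t99"  # hypothetical tab-separated values
print(quote_csv(sample))  # '42','7','99'
```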
Setting up the React Application
With the create-react-app template generator, it is easy to set up a React application and get it up and running. All you need are a couple of commands, and everything is in place.
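The commands were along these lines (a sketch; exact commands depend on your create-react-app version):

```shell
npm install -g create-react-app
create-react-app electron-react
cd electron-react
npm start
```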
The above commands will create an ‘electron-react’ folder with all the code and set up the app at *http://localhost:3000* in development mode.
Setting up Electron
Now that we have a React application set up, let us integrate Electron with it. The below command installs the electron package.
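Assuming npm, installing Electron as a development dependency looks like this:

```shell
npm install --save-dev electron
```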
A basic Electron application needs just these files:
- package.json - Points to the app’s main file and lists its details and dependencies. We already have this as part of the react application.
- main.js - Starts the app and creates a browser window to render HTML. This is the app’s main process. We will add this file.
- index.html - A web page to render. This is the app’s renderer process. We already have this as part of the react application.
Let’s start by adding a main.js file. We will keep the code to the bare minimum. All we are doing here is adding a function createWindow which uses BrowserWindow from the electron package, to create a new window instance. The window loads the development server URL. We will modify this URL later to run independently without a hosted server so that it can be packaged and deployed easily. The app’s ready event is wired to create the new window.
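A minimal sketch of such a main.js follows; the window size is an assumption, and DEV_URL is the environment variable used in the run command later in this post:

```javascript
// main.js - the Electron main process (a minimal sketch)
const { app, BrowserWindow } = require('electron');

let win;

function createWindow() {
  // Create a browser window and point it at the React dev server.
  win = new BrowserWindow({ width: 900, height: 680 });
  win.loadURL(process.env.DEV_URL);
  win.on('closed', () => { win = null; });
}

// Create the window once Electron has finished initializing.
app.on('ready', createWindow);
```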
After updating the package.json with the electron application main entry point, we are all set to run the application.
Fire up two consoles: launch the React application in one using npm start, and the Electron application in the other using ‘set DEV_URL=http://localhost:3000 && electron .’
Setting up for Deployment
Opening two consoles and remembering to start the React server first soon becomes a pain. To avoid this, we can use two npm packages to run both tasks one after the other.
- concurrently: Run multiple commands concurrently.
- wait-on: Wait for files, ports, sockets, http(s) resources to become available
Install both the packages and modify the package.json as shown below.
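One possible shape for the scripts section (the script names here are assumptions, and the Windows-style set is carried over from the run command above):

```json
"scripts": {
  "react-start": "react-scripts start",
  "electron-start": "set DEV_URL=http://localhost:3000 && wait-on http://localhost:3000 && electron .",
  "start": "concurrently \"npm run react-start\" \"npm run electron-start\""
}
```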
Running npm start now launches the react application, waits for the server to be up and running and then launches the electron application.
The Electron app depends on the React application being hosted locally to run. Let’s update main.js so that it can run from the generated output of the React application. Running npm run build generates the website contents into the build folder.
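Inside createWindow, the loadURL call can fall back to the build output when no dev server URL is set. A sketch:

```javascript
const path = require('path');
const url = require('url');

// Use the dev server when DEV_URL is set; otherwise load the
// generated index.html from the build folder.
const startUrl = process.env.DEV_URL || url.format({
  pathname: path.join(__dirname, 'build', 'index.html'),
  protocol: 'file:',
  slashes: true,
});
win.loadURL(startUrl);
```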
Set the homepage property in package.json (“homepage”: “./”) to enable relative paths on the generated index.html file. Once this is done, we can generate the site using npm run build and run the electron application using ‘electron .’. This will launch the application from the build folder.
Hope this helps you to jump start with your Electron app development using React.
In the previous post, Generating a Large PDF from Website Contents - HTML to PDF, Bookmarks and Handling Empty Pages, we saw how to generate a PDF from HTML and add bookmarks to the generated PDF files. Each generated PDF covers an individual section, with the relevant content and bookmarks, and these section PDFs now need to be merged into a single PDF file.
One of the important things to keep intact when merging is the document hierarchy: the Sections, Sub-Categories, and Categories should align correctly so that the final bookmark tree and the Table of Contents come out right. It is best to maintain the list of individual PDF document streams in that same hierarchy. Since we know the required structure right from the UI, this can easily be achieved with a data structure similar to the one shown below.
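A minimal sketch of such a structure; the class and property names here are illustrative, not the exact ones from the original code:

```csharp
using System.Collections.Generic;
using System.IO;

// A tree node: each section carries its generated PDF and its children.
public class DocumentSection
{
    public string Title { get; set; }
    public Stream PdfStream { get; set; }          // generated PDF for this section
    public List<DocumentSection> Children { get; set; } = new List<DocumentSection>();
}
```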
The above structure allows us to maintain a tree-like representation of the document. It is the same structure that is presented to the user to select the PDF options. I used the iTextSharp library to merge the PDF documents. To interact with a PDF, we first create a PdfReader object from the stream. Using the SimpleBookmark class, we can get the existing bookmarks of the PDF.
iText’s representation of bookmarks is a bit complex: it represents them as an ArrayList of Hashtables. The Hashtable has keys like Action, Title, Page, Kids, etc. The Kids property represents child bookmarks and is itself the same ArrayList type. Since this structure is hard to work with directly, I created a wrapper class to interact with the bookmarks more easily.
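A wrapper along these lines is one way to do it. This is a hypothetical sketch, not the original class; the "GoTo" action and the "page Fit" format follow iTextSharp’s SimpleBookmark conventions:

```csharp
using System.Collections;
using System.Collections.Generic;

// Hypothetical wrapper over iTextSharp's Hashtable-based bookmarks.
public class Bookmark
{
    public string Title { get; set; }
    public int PageNumber { get; set; }
    public List<Bookmark> Kids { get; } = new List<Bookmark>();

    // Convert back to the ArrayList-of-Hashtable shape iTextSharp expects.
    public Hashtable ToHashtable()
    {
        var table = new Hashtable
        {
            ["Title"] = Title,
            ["Action"] = "GoTo",
            ["Page"] = PageNumber + " Fit",
        };
        if (Kids.Count > 0)
        {
            var kids = new ArrayList();
            foreach (var kid in Kids) kids.Add(kid.ToHashtable());
            table["Kids"] = kids;
        }
        return table;
    }
}
```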
Recursively iterating through the list of DocumentSections, I add all the bookmarks to a root Bookmark instance, which represents the full bookmark tree of the PDF file. The PageNumber is offset using a counter variable, which is incremented by the number of pages in each PDF section (pdfReader.NumberOfPages) as it gets merged in. This ensures that each bookmark points to the correct page in the combined PDF file.
The individual documents are then merged by iterating through all the generated document sections. Once done we get the final PDF as a byte array which is returned to the user.
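The merge itself can be sketched with iTextSharp’s PdfCopy. This is a minimal version without the bookmark handling, and the method name is an assumption:

```csharp
using System.Collections.Generic;
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

public static byte[] MergeSections(IEnumerable<Stream> sectionStreams)
{
    var output = new MemoryStream();
    var document = new Document();
    var copy = new PdfCopy(document, output);
    document.Open();

    foreach (var stream in sectionStreams)
    {
        var reader = new PdfReader(stream);
        // Copy every page of this section into the combined document.
        for (var page = 1; page <= reader.NumberOfPages; page++)
            copy.AddPage(copy.GetImportedPage(reader, page));
        copy.FreeReader(reader);
        reader.Close();
    }

    document.Close(); // closes the PdfCopy and flushes the output
    return output.ToArray();
}
```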
To generate a Table of Contents (ToC), we can use the root bookmark information. We need to manually create a PDF page, read the bookmark text, and add links to the page with the required font and styling. iText provides APIs to create custom PDF pages.
We are now able to generate a single PDF based on the website contents.
Very often I need to sign forms, receipts, and invoices in PDF format and send them across to someone else. Printing the PDF, signing it physically, and scanning it back (of course using Office Lens) is how I used to do this until a while back. Since I don’t have a printer at home, I always had to wait till I reached the office. Also, I did not like wasting paper and ink just to put a signature.
Adobe PDF reader allows us to ‘Fill and Sign’ documents. Using this option we can add signatures without needing to print them. Follow the below steps to set up your Adobe Reader to sign any document.
1. Sign on a white paper and take a picture. Upload the picture to your computer and crop it using your favorite image editor, so that you have a clean image of just your signature.
2. Open the PDF file that you need to sign with Adobe Reader.
3. Open ‘Fill and Sign’ option. You can do this either from the ‘Tools Pane’ (Shift + F4 on windows) or the menu ‘View -> Tools -> Fill and Sign.’
4. Under the Sign option, you can choose a signature image. Choose the image you created before and save.
You are all set to sign documents now. Anytime you want to sign a document, choose ‘Fill and Sign’ and you will see your signature under the Sign button. Click the signature and place it anywhere on the document that you want to sign. No more printing and scanning them back again.
Scanning physical documents with a scanner can be cumbersome, especially if you do not have easy access to one. Taking pictures with the default camera application on the phone might not give the results you are expecting. Also, you will mostly end up needing to trim such photos to remove unwanted elements.
Microsoft Office Lens is the perfect application for scanning documents and whiteboards. Office Lens detects the document in the camera frame and allows you to capture just what is required, enhancing the selected document sections. Below is an example of the highlight and the captured document.
- Capture and crop a picture of a whiteboard or blackboard, and share your meeting notes with colleagues.
- Make digital copies of your printed documents, business cards or posters, and trim them precisely.
- Printed text is automatically recognized (using OCR) when converting to Word or PDF, so you can search for words in images and copy and edit them.
‘I need some undistracted time.’
This was one of the things that came up in my team’s retrospective yesterday. Having some undistracted time is necessary for getting things done. It’s a good practice to have a consensus among the team members on how to manage disruptions and indicate whether you are open for a chat.
The Headphone Rule is an interesting way to indicate whether a person is open to interactions or not.
- No headphones: you can talk to me.
- One headphone: you can talk to me about work.
- Two headphones: do not talk to me.
For people who do not use headphones, some other technique is needed (like sticky notes, colored lights, etc.). Luckily, everyone in my team uses headphones, so it was an acceptable solution. Irrespective of the way you choose, it is important to have some agreed way to indicate whether you are interruptible or not. It helps you and the team get some undistracted time.
If you are a .NET developer looking for some awesome free stuff, then check out Visual Studio Dev Essentials. You get loads of free stuff:
Free tools, cloud services, and training
Get everything you need to build and deploy your app on any platform. With state-of-the-art tools, the power of the cloud, training, and support, it’s our most comprehensive free developer program ever.
Some of the key attractions of the program are
- $300 Azure Credit for a year
- Access to Xamarin University Training
- Pluralsight access for three months
- WintellectNOW access for three months
All you need is a Windows Live ID to sign up. Get it if you have not already!
Last week was a busy one at NDC Sydney, and I was happy to be back there for the second time. The conference was three days long with 117 speakers, 37 technologies, and 151 talks. Some of the popular speakers were Scott Wlaschin, Scott Allen, Troy Hunt, Damian Edwards, Steve Sanderson, and a lot more.
Each talk is one hour long and eight talks happen at the same time. Below are the talks I attended:
- Keynote: Using EEG and Machine Learning to Perform Lie Detection
- A team's transition to Continuous Delivery
- Docker, FROM scratch
- The Technical Debt Prevention Clinic
- How to start and run a software lifestyle business
- Asynchronous Programming From The Ground Up
- Building Docker Applications with .NET - tooling, cross platform support and migration
- Hack Your Career
- Writing high performance code in .NET
- Growing Serverless code with Azure Functions and F#
- “The website’s down!” Stories and lessons on keeping your website up
- Self-Aware Applications: Automatic Production Monitoring
- Domain Modeling Made Functional
- Interactive C# Development with Roslyn
- Building Resilient Applications In Microsoft Azure
- Functional Design Patterns
- Logic vs. side effects: functional goodness you don’t hear about
- How one team built their first microservice
All sessions are recorded and are available here; the Sydney 2017 ones should be up soon. Overall it was a good event, though it did not match last year’s. Last year there were more of the popular speakers, and the talk content was more interesting. Still, I am glad that NDC Sydney keeps happening; it gives developers good exposure and networking possibilities. Thanks to Readify for sponsoring my tickets. That is one of the good things about working with Readify.
Hope to see you next year as well!