Managing Your Postman API Specs

    Organizing and managing your API specs either through Postman Cloud or your Source Control.

    In the previous post, we explored how to use Postman for testing API endpoints. Postman is an excellent tool for managing API specs as well, so that you can try API requests individually to see how things are working. It also acts as documentation for all your API endpoints and serves as a good starting point for someone new to the team. When it comes to managing the API specs for your application, there are a few options available; let's explore what they are.

    Organizing API Specs

    Postman supports the concept of Collections, which are essentially folders that group saved API requests/specs. Collections support nesting, which means you can add folders within a collection to group requests further. As you can see below, MyApplication and Postman Echo are collections, and there are subfolders inside them which in turn contain API requests. The multi-level hierarchy helps you organize your requests the way you want to.

    Postman Collections

    Sharing API Specs

    Any Collection that you create in Postman is automatically synced to Postman Cloud if you are logged in with an account, and you can share collections through a link. With the paid version of Postman, you get to create team workspaces, which means a team can collaborate on the shared collections. This makes it easy to share specs across your team and manage them in a centralized place.

    However, if you are not logged in or don't have a paid version of Postman, you can maintain the specs along with your source code. Postman allows you to export Collections and share specs as a JSON file. You can then check this file into your source code repository, and other team members can import the exported file to get the latest specs. The only disadvantage is that you need to make sure to export/import every time you or other team members change the JSON file. However, I have seen this approach work well in teams; one way we made sure the JSON file was up to date was to treat updating the API spec as part of the work item and require it to be peer reviewed (through Pull Requests).

    Managing Environments

    Typically, any application/API is deployed to multiple environments (like localhost, Development, Testing, Production, etc.), and you would want to switch between these environments seamlessly when testing your API endpoints. Postman makes this easy with its Environments feature.

    Postman Environment

    Again, as with Collections, Environments are also synced to Postman Cloud when you are logged in, which makes all your environments available to your whole team. If you are not logged in, you can again export the environments as a JSON file and then share it out of band (in a secure manner, as it might contain sensitive information like tokens, keys, etc.) with your team.

    Publishing API Specs

    Postman allows you to publish API specs (even to a custom URL), which can act as your API documentation. You can publish per environment and easily execute the requests. Publishing is available only if you are logged in to an account, as it requires the API specs and environment details in the first place.

    Postman Published

    Security Considerations

    When using the sync feature of Postman (logged in to the application with Postman account), it is recommended that you do not have any sensitive information (like passwords/tokens) as part of the API request spec/Collection. These should be extracted out as Environment variables and stored as part of the appropriate environment.
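
    For example, instead of hard-coding a token into a request, you could reference an environment variable in the header value; the variable name below is purely illustrative.

    Authorization: Bearer {{authToken}}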

    If you are logged in, all the data that you add is automatically synced, which means it will be living on Postman's cloud servers. This might not be desirable for every company, but it looks like there is no option to turn sync off at the Collection level. The only way to not sync collections is to not log into an account in Postman.

    If you are logged into Postman, then any collection that you create is automatically synced to the Postman server. The only way to prevent sync is not to log in.

    We have seen the options by which you can share API collections and environments amongst your team even if you are not logged in. However, one thing to be aware of is that if any of your team members are logged into Postman and import a collection shared via the repository or out-of-band methods, it will be synced to the Postman server. So at the organization/team level, you would need ways to prevent this from happening if it is essential for you. The best approach is to design your APIs in such a way that you do not have to expose such sensitive information, which is a better practice anyway.

    Hope this helps you manage your API specs better!

    Automated API Testing Using Postman Collection Runner

    Quick and easy way to test your API.

    A while back we looked at how we can use Postman to chain multiple requests to speed up our manual API testing. For those who are not familiar with Postman, it is an application that assists in API testing and development, which I see as sitting a level above a tool like Fiddler.

    In this post, we will see how we can use Postman to test some basic CRUD operations over an API using a feature called the Postman Runner. Using it still involves some manual intervention; however, the runs can be automated using a combination of different tools.

    Setting Up the API

    To start with, I created a simple API endpoint using the out-of-the-box Web API project from Visual Studio 2017. It is a Values Controller which stores key-value pairs and to which you can send GET, POST, and DELETE requests. Below is the API implementation. It is a simple in-memory implementation and does not use any persistent store; however, the tests would not change much even if the store were persistent. What matters here is not the implementation of the API, but how you can use Postman to add some quick tests.

    using System.Collections.Generic;
    using System.Web.Http;

    public class ValuesController : ApiController
    {
        // Simple in-memory store for the key-value pairs
        static Dictionary<int, string> values = new Dictionary<int, string>();

        // GET api/values - returns all stored values
        public IEnumerable<string> Get()
        {
            return values.Values;
        }

        // GET api/values/{id} - returns the value for the given id, or 404 if missing
        public IHttpActionResult Get(int id)
        {
            if (values.ContainsKey(id))
                return Ok(values[id]);

            return NotFound();
        }

        // POST api/values/{id} - adds or updates the value for the given id
        public IHttpActionResult Post(int id, [FromBody]string value)
        {
            values[id] = value;
            return Ok();
        }

        // DELETE api/values/{id} - removes the value for the given id, or 404 if missing
        public IHttpActionResult Delete(int id)
        {
            if (!values.ContainsKey(id))
                return NotFound();

            values.Remove(id);
            return Ok();
        }
    }

    Setting Up Postman

    To start with, we will create a new Collection in Postman to hold our tests for the Values Controller - I have named it 'Values CRUD - Test'. The collection is a container for all the API requests that we are going to write. First, we will add all the request definitions into Postman, which we can later reorder for the tests.

    Postman Request

    The {{ValuesUrl}}/{{ValueId}} in the URL are parameters defined as part of the selected Environment. Environments in Postman allow you to switch between different application environments like Development, Test, Production. You can configure different values for each environment and Postman will send the requests as per the configuration.

    Below are the environment variables for my local environment. You can define as many environments as you want and switch between them.

    Postman Environment

    Now that I have all the request definitions for the API added, let's add some tests to verify our API functionality.

    Writing The First Test

    Postman allows executing scripts before and after running API requests. We saw this in the API chaining post, where we grabbed the messageId from the POST request and added it to an environment variable for use in subsequent requests. Similarly, we can also add scripts to verify that the API request returns the expected results, status code, etc.
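
    As a refresher, a chaining script might look something like the snippet below; this is a minimal sketch, and messageId as well as the response shape are illustrative rather than taken from the API in this post.

    // Runs in the Tests tab after the POST request completes:
    // read the response body and stash a value for later requests.
    var response = pm.response.json();
    pm.environment.set("messageId", response.id);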

    Let's first write a simple test on our GET API request to check that it returns a 200 OK response when called. The below test uses Postman's pm API to assert that the status code of the response is 200. Check the Response Assertion API in test scripts to see the other assertion options available, like pm.response.to.have.status. The tests go under the Tests section, similar to where we wrote the scripts to chain API requests. When executing the API request, the Tests tab shows the successful test run for that particular request.

    pm.test("Status code is 200", function() {
      pm.response.to.have.status(200);
    });
    

    Postman Tests
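
    You can also assert on the response body itself. Below is a minimal sketch that checks the GET response contains the value stored in the Value environment variable used later in this post.

    pm.test("Response contains the stored value", function() {
      pm.expect(pm.response.text()).to.include(pm.environment.get("Value"));
    });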

    Similarly, you can also write a Pre-request Script to set variables or perform any other operation. Below I am setting the Value environment variable to "Test". You could generate a random value here, set a random id, or set an identifier that does not already exist. It's test/application specific, so I leave it to you to decide what works best for you.

    pm.environment.set("Value", "Test");
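
    For instance, a pre-request script that picks a random id for the ValueId variable used in the request URLs could look something like this rough sketch.

    // Generate a pseudo-random id and store it for the request URL to use.
    pm.environment.set("ValueId", Math.floor(Math.random() * 100000));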
    

    Collection Runner

    The Collection Runner allows you to manage multiple API requests and run them as a set. Once completed, it shows a summary of all the tests included within each request and details of the tests that passed/failed in the run. You can target the Runner against the environment of your choice.

    Postman Collection Runner

    Running these tests still involves some manual effort of selecting environments and running them. However, using Newman, you can run Postman Collections from the command line, which means they can run even in your build pipeline.
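
    For example, if the collection and environment are exported as JSON files, a Newman run from the command line could look something like the following; the file names here are illustrative.

    npm install -g newman
    newman run "Values CRUD - Test.postman_collection.json" -e "localhost.postman_environment.json"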

    Using Postman, we can quickly test our APIs across multiple environments. The Collection Runner also shows an excellent visual summary of the tests and helps us in API development. However, I found these tests to violate the DRY principle: you need to repeat the same API request structure if you have to use it in a different context. In the example above, I had to create two Get Value By Id requests - one to test for the value existing and one for when it does not exist. You could use some conditional looping inside the scripts, but that makes your tests complicated and gets you into the loop of how to test your tests.

    Postman does allow you to export the API requests to the language of your choice, so once you have the basic schema, you can export them and write tests that compose them. I find Postman tests and the Runner a quick way to start testing your API endpoints, and for more complicated cases you can switch to a full programming language. Having the tests in Postman also gives us an API spec in place and can be useful for playing around with the API.

    Tip of the Week: Reading Eggs - Learning To Read Can Be Easy And Fun

    A fun, easy and effective way for kids to learn to read.

    A couple of months back, Gautham (my son) started playing Reading Eggs. We started off with a free trial after it was recommended by our friend Asha. We got a 21-day extended trial in addition to the initial 21-day free trial (i.e. a total of 42 days), which helped a lot in confirming that Gautham would actually use the app. We noticed that he was reading small words quite comfortably and grew an interest in reading various things around him. We took an annual subscription, and it feels totally worth it.

    Reading Eggs Levels

    Using the five essential keys to reading success, the program unlocks all aspects of learning to read for your child.

    • The lessons use colourful animation, fun characters, songs, and rewards to keep children motivated.
    • The program is completely interactive to keep children on task.
    • When children start the program, they can complete a placement quiz to ensure they are starting at the correct reading level.
    • Parents can access detailed progress reports as well as hundreds of full-colour downloadable activity sheets that correspond with the lessons in the program.
    • The program includes over 2000 online books for kids – each ending with a comprehension quiz that assesses your child’s understanding.

    Each level explores different letter/word combinations and has a quiz at the end that needs to be passed to move on to the next level. The program unlocks all aspects of learning to read for your child, focusing on a core curriculum of phonics and phonemic awareness, sight words, vocabulary, comprehension, and reading for meaning.

    Do give the app a try if you have kids at home!

    Exploring AzureKeyVaultConfigBuilder

    Various usage scenarios of Azure Key Vault as a Visual Studio Connected Service

    Over the last weekend, I was playing around with the Visual Studio Connected Services support for Azure Key Vault. The new feature allows seamless integration of ASP.NET web applications with Azure Key Vault, making it as simple as using the ConfigurationManager to retrieve Secrets from the Key Vault - just like you would retrieve them from the config file.

    In this post, we will look in detail at the AzureKeyVaultConfigBuilder class that enables the seamless integration provided by Connected Services. As we saw in the previous post, when you add Key Vault as a Connected Service, it modifies the application's configuration file to add in the AzureKeyVaultConfigBuilder references.

    Make sure to update the Microsoft.Configuration.ConfigurationBuilders.Azure and Microsoft.Configuration.ConfigurationBuilders.Base NuGet packages to the latest version.

    Loading Connection String and App Settings

    The AzureKeyVaultConfigBuilder can be specified on both the appSettings and connectionStrings elements using the configBuilders attribute.

     <appSettings configBuilders="AzureKeyVault">
     ...
     </appSettings>
     <connectionStrings configBuilders="AzureKeyVault">
     ...
     </connectionStrings>

    Accessing Multiple Key Vaults

    The configBuilders attribute supports a comma-separated list of builders. Using this feature, we can specify multiple Vaults as a source for our secrets. Note how we pass 'keyVault1,keyVault2' to the configBuilders attribute below.

    <configBuilders>
        <builders>
          <add
            name="keyVault1"
            vaultName="keyVault1"
            type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure, Version=1.0.0.0, Culture=neutral" />
    
          <add
            name="keyVault2"
            vaultName="keyVault2"
            type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure, Version=1.0.0.0, Culture=neutral" />
        </builders>
      </configBuilders>
      <appSettings configBuilders="keyVault1,keyVault2">
      ...
      </appSettings>

    If the same key has a value in multiple sources, then the value from the last builder in the list takes precedence. (But I assume you would not need that feature!)

    Modes

    All config builders support setting a mode, which has three options.

    • Strict - This is the default. In this mode, the config builder will only operate on well-known key/value-centric configuration sections. It will enumerate each key in the section, and if a matching key is found in the external source, it will replace the value in the resulting config section with the value from the external source.

    • Greedy - This mode is closely related to Strict mode, but instead of being limited to keys that already exist in the original configuration, the config builders will dump all key/value pairs from the external source into the resulting config section.

    • Expand - This last mode operates on the raw XML before it gets parsed into a config section object. It can be thought of as a simple expansion of tokens in a string. Any part of the raw XML string that matches the pattern ${token} is a candidate for token expansion. If no corresponding value is found in the external source, then the token is left alone.

    In short, when set to Strict, it matches the keys in the configuration file to Secrets in the configured Vaults; if it does not find a corresponding Secret, it ignores that key. When set to Greedy, irrespective of what keys are in the configuration file, it makes all the Secrets in the specified Vaults available via configuration. This, to me, sounds like magic and is not something I would prefer in an application that I build.
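
    For reference, the mode is set via the mode attribute on the builder definition; below is a sketch of what a Greedy builder could look like, with the vault name being illustrative.

    <add
      name="AzureKeyVault"
      mode="Greedy"
      vaultName="keyVault1"
      type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure, Version=1.0.0.0, Culture=neutral" />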

    Greedy Mode Filtering and Formatting Secrets

    When using Greedy mode, we can filter the list of keys that are made available by using the prefix option. Only Secret names starting with the prefix are made available in the configuration; the other Secrets are ignored. This feature can be used in conjunction with the stripPrefix option. When stripPrefix is set to true (it defaults to false), the Secret is made available in the configuration after stripping off the prefix.

    For example, if we have a Secret with the name connectionString-MyConnection, the below configuration will add a connection string with the name MyConnection.

    <add
      name="keyVault1"
      vaultName="keyVault1"
      prefix="connectionString-"
      stripPrefix="true"
      type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure, Version=1.0.0.0, Culture=neutral" />
    var connectionString = ConfigurationManager.ConnectionStrings["MyConnection"];

    Use prefix and stripPrefix in conjunction with the Greedy mode. For keys mentioned in the config, it will try to match them with the prefix appended to the key name.

    Preloading Secrets

    By default, the Key Vault config builder is set to preload the available Secrets in the Key Vault. By doing this, the config builder knows the list of configuration values that the Key Vault can resolve. For preloading the Secrets, the config builder uses the List call on Secrets. If you don't have List access on Secrets, you can turn this feature off using the preloadSecretNames configuration option. At the time of writing, the config builder version (1.0.1) throws an exception when preloading Secrets is turned on and the List policy is not available on the Vault. I have raised a PR to fix this issue, which, if accepted, would no longer throw the exception and would make this configuration option unnecessary.

    <builders>
        <add
          name="keyVault1"
          preloadSecretNames="false"
          vaultName="keyVault1"
          type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure, Version=1.0.0.0, Culture=neutral" />
    </builders>

    Authentication Modes

    The connectionString attribute allows you to specify the authentication mechanism for Key Vault. By default, when using the Connected Service to create the Key Vault, it adds the Visual Studio user to the access policies of the Key Vault, and when connecting it uses the same identity. However, this does not help in a large team scenario. Most likely the Vault will be created under your organization's subscription, and you might want to share the same Vault between all developers in the team. You could add the users individually and give them the appropriate access policies, but this soon becomes cumbersome for a large team. Instead, using Client Id/Secret or Certificate authentication, along with the Managed Service Identity configuration for localhost, works best. The configuration provider will then use the AzureServicesAuthConnectionString value from the environment variable to connect to the Key Vault.

    Set the AzureServicesAuthConnectionString environment variable to one of the following:
    RunAs=App;AppId=AppId;TenantId=TenantId;AppKey=Secret
    or
    RunAs=App;AppId=AppId;TenantId=TenantId;CertificateThumbprint=Thumbprint;CertificateStoreLocation=CurrentUser
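
    Based on the connectionString attribute mentioned above, supplying the same connection string directly on the builder definition could look something like the sketch below; the values are placeholders, and keeping the value in an environment variable instead keeps secrets out of the config file.

    <add
      name="AzureKeyVault"
      vaultName="keyVault1"
      connectionString="RunAs=App;AppId=AppId;TenantId=TenantId;AppKey=Secret"
      type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure, Version=1.0.0.0, Culture=neutral" />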

    As you can see, the AzureKeyVaultConfigBuilder provides good integration with Key Vault and makes using it seamless. It does have a few issues, especially around handling different Secret versions, which might be fixed in future releases.

    PS: At the time of writing, there were a few issues that I had found while playing around. You can follow up on the individual issues on GitHub. Fingers crossed, hopefully at least one of my PRs makes its way through to master!

    Azure Key Vault As A Connected Service in Visual Studio 2017

    Getting started with Key Vault is now more seamless!

    Visual Studio (VS) now supports adding Azure Key Vault as a Connected Service for web projects (ASP.NET Core or any ASP.NET project). Enabling this from Connected Services makes it easier for you to get started with Azure Key Vault. Below are the prerequisites to use the Connected Service feature.

    Prerequisites

    • An Azure subscription. If you do not have one, you can sign up for a free account.
    • Visual Studio 2017 version 15.7 with the Web Development workload installed. Download it now.
    • An ASP.NET 4.7.1 or ASP.NET Core 2.0 web project open.

    Visual Studio, Azure Key Vault Connected Services

    When you select the 'Secure Secrets with Azure Key Vault' option from the list of Connected Services, it takes you to a new page within Visual Studio showing the Azure subscription associated with your Visual Studio account and gives you the ability to add a Key Vault to it. VS generates some defaults for the Vault Name, Resource Group, Location, and Pricing Tier, which you can edit as per your requirements. Once you confirm adding the Key Vault, VS provisions the Key Vault with the selected configuration and modifies some things in your project.

    Visual Studio, Azure Key Vault Connected Services

    In short, VS adds

    • a bunch of NuGet packages to access Azure Key Vault
    • the Key Vault URL details
    • for ASP.NET Web projects, changes to the configuration file to add in the AzureKeyVaultConfigBuilder, as shown below

    <configuration>
      <configSections>
        <section
          name="configBuilders"
          type="System.Configuration.ConfigurationBuildersSection, System.Configuration, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
          restartOnExternalChanges="false"
          requirePermission="false" />
      </configSections>
      <configBuilders>
        <builders>
          <add
            name="AzureKeyVault"
            vaultName="webapplication-47-dev-kv"
            type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure, Version=1.0.0.0, Culture=neutral"
            vaultUri="https://WebApplication-47-dev-kv.vault.azure.net" />
        </builders>
      </configBuilders>

    To start using Azure Key Vault from your application, we first need to add some Secrets to the Key Vault created by Visual Studio. You can add a Secret to the Key Vault in multiple ways, the most straightforward being the Azure Portal. Once you add the Secret to the Key Vault, update the configuration file with the Secret names. Below is how you would do it for an ASP.NET Web project (the MySecret and VersionedSecret keys).

    Make sure to add configBuilders="AzureKeyVault" to the appSettings tag. This tells the Configuration Manager to use the configured AzureKeyVaultConfigBuilder.

    <appSettings configBuilders="AzureKeyVault">
          <add key="webpages:Version" value="3.0.0.0" />
          <add key="webpages:Enabled" value="false" />
          <add key="ClientValidationEnabled" value="true" />
          <add key="UnobtrusiveJavaScriptEnabled" value="true" />
          <add key="MySecret" value="dummy1"/>
          <add key="VersionedSecret" value="dummy2"/>
    </appSettings>

    The dummy* values are just placeholders and will be overridden at runtime with the Secret values from the Key Vault. If a Secret with the corresponding name does not exist in the Key Vault, the dummy value will be used.
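
    Reading the value in code stays the same as reading any other app setting; at runtime the config builder has already substituted the Secret value. A minimal sketch:

    var mySecret = ConfigurationManager.AppSettings["MySecret"];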

    Authentication

    When VS creates the Vault, it adds the user logged into VS to the Access Policies list. When running the application, the AzureKeyVaultConfigBuilder uses the same details to authenticate with the Key Vault.

    If you are not logged in as the same user, or not logged in at all, the provider will not be able to authenticate with the Key Vault and will fall back to using the dummy values in the configuration file. Alternatively, you could specify one of the connection options available for AzureServiceTokenProvider.

    Visual Studio, Azure Key Vault Connected Services

    Secrets and Versioning

    The AzureKeyVaultConfigBuilder requests all the Secrets in the Key Vault at application startup using the Secrets endpoint. This call returns all the Secrets in the Key Vault. For any key in appSettings that has a match with a Secret in the Vault, a request is made to get the Secret details, which returns the actual Secret value for that key. Below are the traces of the calls going out, captured using Fiddler.

    AzureKeyVaultConfigBuilder Fiddler Traces

    It looks like at the moment the AzureKeyVaultConfigBuilder gets only the latest version of the Secrets. As you can tell from one of my Secret names (VersionedSecret), I have created two versions of that Secret, and the config builder picks the latest version. I don't see a way right now to specify a particular Secret version.

    The Visual Studio Connected Services feature makes it easy to get started with Azure Key Vault and move your secrets to a more secure store, rather than having them lying around in your configuration files.

    Tip of the Week: Prettier - An Opinionated Code Formatter

    Format your code fast, easy and consistent.

    Code formatting is an essential aspect of writing code, and I did write about this a while back when introducing code formatting into a large code base. It's not about which rules you and your team use; it's about sticking with the conventions and using them consistently. Code formatting rules are best when applied automatically, so the developer does not need to do anything in particular about it.

    Prettier is an opinionated code formatter which supports multiple languages and editors and is easy to get started with. Getting set up is as easy as installing the prettier package using yarn/npm. There are multiple points at which you can integrate Prettier - in your editor, in a pre-commit hook, or in your CI environment.
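
    As a rough sketch, getting Prettier into a project and formatting files could look something like the following; the glob pattern is illustrative, and yarn works just as well as npm.

    npm install --save-dev prettier
    npx prettier --write "src/**/*.js"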

    Most IDEs have plugins for Prettier, which makes it easy to get it into the code right from the beginning. You might need to update your IDE settings to run Prettier when you save a file. For VS Code, I had to set editor.formatOnSave to true to turn on this behaviour.

    As the title says, Prettier is opinionated, which is useful in many ways and removes much time wasted on unnecessary discussions. However, it does provide some configuration options. Check if it provides enough for you to call a meeting to decide on one of them :).

    Write prettier code!

    Tip of the Week: Check If You Have Been Hacked!

    Check if you have an account/password that has been compromised in a data breach


    With more and more data breaches happening, it is possible that your personal information and passwords have already been compromised. If you have been lazy and reused passwords (just like me until a while back) across multiple sites, then it is good to check if your password is already compromised. It need not necessarily be one of your social media or bank accounts that gets compromised for an attacker to get your credentials. If you have been reusing passwords across sites, it might be that one site where security is not given much importance that gets breached, exposing your credentials to the attacker or anyone who has the breached data. Often hackers use this information to try and enumerate other sites, social networks, and bank logins, assuming the behaviour of password reuse.

    Check your email or password to see if any has been part of a data breach.

    To check if you have been part of a data breach, you can use the service haveibeenpwned. If you have been part of any data breaches, it will show you the details. In addition to that, you can also use the Pwned Passwords list to check if a password that you use has been part of any data breach. It's good to change your password if you find yours in there. If you are worried about entering your password into the haveibeenpwned site, the good thing is that it uses a k-anonymity model, which means that your full password is never sent across the wire.

    Next Steps

    • Update your passwords on all sites that you use if you have been reusing passwords. If you don't have much time to do this in one shot, you can do it incrementally as and when you next visit them.
    • Make sure you have unique passwords for each site. A good password is one that you cannot remember, so if you are not using a Password Manager, it's a good idea to start using one. If you don't want to spend money on a password manager, you can always use a random password generator to generate one for you. Remembering that password might be hard; you could either write it down or save it in the browser (not that I am recommending that over getting a Password Manager, but it is better than reusing passwords).

    NDC Security 2018 - Overview and Key Takeaways

    Some key takeaways from the security conference held in Gold Coast.

    While in Sydney, I was lucky enough to have attended the first and second NDC conferences there. After moving up to Brisbane, I did not think I would be able to attend one of these again soon. But then came a nice, shorter version of NDC specific to security - NDC Security. As the name suggests, this conference is particular to security-related topics, with a 2-day workshop and a 1-day conference, and was held in Gold Coast, Queensland.

    The Workshop

    Troy Hunt and Scott Helme ran two workshops, and I attended Hack Yourself First by Troy. The workshop covers a wide range of topics and is perfect for anyone who is into web development. The best thing is that you only need a browser and Fiddler/Charles Proxy (depending on whether you are in Windows or Mac land). One of the interesting things about the workshop is that it first puts you into the hacker's perspective and forces you to exploit existing vulnerabilities in a sample site designed specifically for this. Once you can do this, we then look at ways of protecting ourselves against such exploits and the other mechanisms involved.

    Hack yourself first, Troy Hunt

    The workshop highlights how easy it is to find and exploit vulnerabilities in applications. Some tools detect vulnerabilities and exploit them for you if you input a few details. You do not necessarily need to know the vulnerabilities themselves or exactly how to exploit them, and such tools make it easy for anyone to use them on any website that is out there on the web. Combined with the power of search engines, this makes your site's vulnerabilities easily discoverable.

    The Conference

    There were six talks in total and below are the ones that I found interesting.

    NDC Security, 2018 - Conference

    The whole web is on a journey towards becoming more secure, so it is an excellent time to move to HTTPS if you have not already. Even after enabling HTTPS, it is a good idea to make sure you have all the appropriate security headers set. Making sure that the libraries you depend on are patched and updated is equally essential; there have been incidents of massive data breaches because of vulnerabilities in third-party libraries that were not kept updated.

    Functionality need not be the only reason to upgrade third-party libraries. There might be security vulnerabilities that are getting patched, which is an equally good reason to update dependent packages.

    The harder thing is to keep track of the vulnerabilities that get reported and to keep checking back against your application's dependencies. There is a wide range of tools that make this easy and integrate seamlessly into the development workflow. They can be included as early as when a developer intends to bring a library into the source code, in the build pipeline, or even for sites that are already up and running. The earlier such issues get detected in the software development lifecycle, the less impact they have on time and cost.

    Tools

    The conference ended with a good discussion between Troy and Scott on how everything is cyber-broken. It touched upon the value of Extended Validation (EV) certificates and how CAs are trying to push for them while browsers are increasingly moving away from them. It also touched on various proponents of HTTP and the wrong messages that are getting spread to a broader audience, as well as certificate revocation and a lot more. It was a fun discussion and a great end to the three-day event.

    Location and Food

    NDC Security was held at QT Gold Coast, Queensland and was well organized. Coffee and drinks were available throughout the day, with a barista on the last day (which was cool). Food was served at the start, at breaks, and at lunch, and was good. The conference rooms were great and spacious and had reasonably good internet. We did not face many connectivity issues, and everything ran smoothly.

    NDC Security, 2018 - Food and Location

    One of the first things I did after coming back from the conference was to move this blog over to HTTPS. I had been procrastinating on this for long, but there were enough reasons to make the move now. Also, there are a bunch of things that now catch my eye at client places and other websites that I visit often. Attending the conference and workshop has been a great value add, and I recommend it to anyone who has a chance to attend. For the others, most of the content is available on Pluralsight.

    PS: Special thanks to Readify for sending me to this conference and also providing a ‘paid vacation (accommodation)’ in Gold Coast. It was a nice three-day break for my wife and son also.

    Tip of the Week: Authy - Sync Two Factor Authentication Across Devices

    2FA across multiple devices with cloud backups.

    Two Factor Authentication (2FA) is becoming more and more common these days and is a good way to protect your accounts from getting into the wrong hands. SMS and app-based 2FA are the more common options with the day-to-day services that we use, like Gmail, Outlook, Facebook, etc. With 2FA enabled, when logging in the user is prompted for a number that gets sent to them via phone or generated using an application, in addition to the username and password. Enabling 2FA protects your account a level further: even if an attacker has your credentials from a data breach, they would still need access to your phone to log in to your account. Using an app to generate the codes is preferable to using SMS, as it does not require internet connectivity or mobile service.

    Until lately, I had been using Google Authenticator to generate codes for all the accounts on which I have 2FA enabled. The app works well on a single mobile device but becomes a pain when you want to switch phones or you lose the phone. You could potentially be locked out of your accounts if you lose the phone and don't have the backup codes available.

    Authy

    Authy is one of the best-rated 2FA applications and targets exactly these issues with Google Authenticator. It is easy to set up, can be secured via TouchId/password, supports encrypted backups, and syncs across multiple applications and devices. Once set up, any account that you add to the app gets synced through Authy servers, encrypted and secured. Authy has applications for mobile and desktop and also has a plugin for the Chrome browser. You can also manage devices from your account and revoke a device if it gets lost or is no longer used. The Authy vs Google Authenticator post covers in detail all the differences between the two and the advantages of using Authy.

    Check out Authy and do setup 2FA if you are not already!

    If you are here reading this, you probably have a website and are serving it over HTTP. If you are unsure whether your site needs HTTPS or not, don't think twice - YES, YOUR SITE NEEDS HTTPS.

    If you are not convinced, check out https://doesmysiteneedhttps.com/. One of the main reasons I have seen (myself included) why people have shied away from HTTPS is cost, and this post explains how to get HTTPS for free. But make sure you are getting it for the correct reasons and that you know exactly what you are getting.

    Depending on how you are hosting, you can take one of two routes to enable HTTPS on your site. Let's look at them in detail.

    Option 1 - Get your Certificate and Add to Your Host

    If your hosting service already allows you to upload a custom domain certificate, and you were just holding back because of the extra cost of getting one, then head over to Let's Encrypt to get your free certificate. Depending on your hosting provider and the level of access that you have on your web server, Let's Encrypt has multiple ways in which you can get a certificate.

    What does it cost to use Let’s Encrypt? Is it really free?
    We do not charge a fee for our certificates. Let’s Encrypt is a nonprofit, our mission is to create a more secure and privacy-respecting Web by promoting the widespread adoption of HTTPS. Our services are free and easy to use so that every website can deploy HTTPS.

    We require support from generous sponsors, grantmakers, and individuals in order to provide our services for free across the globe. If you’re interested in supporting us please consider donating or becoming a sponsor.

    In some cases, integrators (e.g. hosting providers) will charge a nominal fee that reflects the administrative and management costs they incur to provide Let’s Encrypt certificates.

    Option 2 - CloudFlare

    If you are, like me, on a shared/cheaper hosting service, it is more likely that your hosting plan does not support adding SSL certificates. You will be forced to upgrade to a higher plan to upload a certificate, which in turn will cost you more. In this case, you can use Cloudflare to enable HTTPS for free.

    Cloudflare provides lots of features for websites, but in our case, we are more interested in what the Free plan gives us. It gives us a Shared SSL Certificate and also added benefits of Global CDN.

    Cloudflare acts as a reverse proxy between you and the server hosting this web page, which simply means that all requests now go through Cloudflare, which in turn reaches out to the web server if it cannot find a locally cached copy. This also means that there are now fewer calls to the web server, as Cloudflare serves responses from its cache when already available.

    Shared SSL is what is most interesting for us in this blog post. What Shared SSL gives us is free HTTPS for our website. We get a Domain Validated (DV) certificate, with a small catch: it is not issued to our domain but to a shared Cloudflare domain (sni154817.cloudflaressl.com in my case). If you want a custom SSL certificate, you need to be on a paid plan.

    Cloudflare supports multiple SSL settings - Off, Flexible SSL, Full SSL, and Full SSL (Strict). Depending on how your host is set up, you can choose one of these options. Since I am using Azure Web Apps to host, it supports HTTPS over the *.azurewebsites.net subdomain. But since the certificate is not for my custom domain name (rahulpnath.com), I have set the SSL setting to Full SSL; Cloudflare in this case will connect over HTTPS but will not validate the certificate. If your host does not support HTTPS connections (for free), you can use Flexible SSL.

    You can also choose to enable Cloudflare with Full SSL (Strict) if you have followed Option 1 and have a custom SSL certificate for the domain. This will give you the added benefits that Cloudflare provides.

    Enabling HSTS Preload

    Now that you have HTTPS set up on your domain with either of the options above, the website is accessible over HTTPS. However, when you make the very first request to the website, the request goes over HTTP and then redirects over to HTTPS, after which the communication happens over a secure channel. This means there is a risk that the very first request can be intercepted and cause undesired behaviour.

    Trust on first use (TOFU), or trust upon first use (TUFU), is a security model used by client software which needs to establish a trust relationship with an unknown or not-yet-trusted endpoint.

    By setting the STS (Strict-Transport-Security) header along with the preload directive, we can then add our domain to the HSTS preload list. By adding your domain to this list, it literally gets hardcoded into the source code of browsers (like, for example, Chrome here). So any time a request is made to a site, it is checked against this hardcoded list available in memory, and if present, the request goes over HTTPS from the very first one. You can have all subdomains of your domain HSTS preloaded as well. Make sure all subdomains are served over HTTPS so that you do not lock yourself out of those sites. You can find more details on HSTS here.
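
    For reference, the response header could look something like the line below; the max-age value shown is illustrative, and the preload list has its own minimum requirements for it.

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload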

    Now that the cost factor is out of the way for making your site support HTTPS, is there anything else holding you back? If speed is a concern and you worry that encryption/decryption at both ends of the communication is going to slow you down, take a look at this post on HTTPS' massive speed advantage. If you are still not convinced, let me give it one last shot to get you on board. Going forward, most modern browsers are going to treat the web as secure by default. So instead of the present positive visual security indicators, they will start showing warnings on pages served over HTTP. That means your sites will soon start showing Not Secure if you do not move over to HTTPS.

    I don't see any reason why we should still be serving our sites over HTTP. As you can see, I have moved over to HTTPS and have added this domain to the preload list as well. Let's make the web secure by default!