How many times have you had to update the API code to return an empty list or throw an error to test edge case scenarios?

    How do you demo these different API scenarios to someone?

The pages and components of our application have different states. Most of these states depend on the data returned from the server via the API. Often it’s hard to simulate the different scenarios for the API, so we stick with the ‘happy path scenario’ - the one that happens most of the time. Writing code or testing for the happy path scenario is easy, as the API endpoint is most likely to behave that way. Edge case scenarios are trickier to develop or test and are often ignored or left untested.

    Let’s take the example of a simple list page of Quotes. Some of the scenarios for this endpoint are - no quotes available, some quotes available, request to the server errors, and more. The ‘happy path scenario’ here is some quotes existing, and most of our development and testing will be against that. It would be good if we could simulate the different application scenarios using a fake JSON Server API. That would allow us to simulate any use case or scenario we want and write code against it. Not to mention that testing, demoing, and writing automated tests all become easier.


    In this post, we will look at how to set up a fake JSON Server API to return data based on scenarios we specify. This post describes an approach that you can adapt to your application and the scenarios you have. If you are new to setting up a fake API, check out how to Set Up A Fake REST API Using JSON Server.

    Specifying Scenarios to JSON Server

    To start with, we need to specify which scenario we are interested in when calling the API. A scenario could be specific to one API endpoint or to multiple. On an API endpoint, the best place to pass extra data is the request headers, as it is the least intrusive. We need to send the scenarios only in our development environment, and it does not hurt to add an extra header to every HTTP request.
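As a sketch of how a front-end app might attach this header on every request in development (the helper name and shape are illustrative, not from this post):

```javascript
// Hypothetical helper that merges a 'scenarios' header into fetch options.
// Scenario names are joined with a space, matching the header format below.
const withScenarios = (options = {}, scenarios = []) => ({
  ...options,
  headers: { ...(options.headers || {}), scenarios: scenarios.join(" ") },
});

// Usage: fetch("/api/quotes", withScenarios({}, ["draft", "phone"]));
```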

    To request a particular scenario from the fake API, we pass it as part of the ‘scenarios’ header.

    Let’s modify the router.render in JSON Server to return the data based on the scenarios specified in the request header. Based on the values in the request header, we can filter the data and update the response.

    router.render = (req, res) => {
      const scenariosHeaderString = req.headers["scenarios"];
      const scenariosFromHeader = scenariosHeaderString
        ? scenariosHeaderString.split(" ")
        : [];
    
      ...
    }

    Below are some sample values that the header can have for our quotes list scenario:

    // Different options that the scenarios header can have
    
    scenarios: "open" // Only quotes in open status
    scenarios: "draft phone" // All draft quotes that have a phone
    scenarios: "error-quotes" // Server error getting the quotes
    scenarios: "no-quotes" // Empty list of quotes
    scenarios: "" // All quotes

    Organizing Scenarios and Mock Data in JSON Server

    For the mock data, let’s add an extra attribute to indicate the scenarios that apply to that particular quote. E.g., the mock quotes defined in quotes.ts are updated as below with an additional scenarios array property that takes in a list of scenarios applicable to the quote. Based on the state of the quote object, the scenarios will differ. We filter the response data based on the scenarios attribute and return only the quotes that match all the scenarios in the header.

    For example, when the scenarios header is ‘draft phone’, only the quotes that have both values in the scenarios property are returned. To return an empty list, send a scenario value that does not exist on any of the mock quotes; ‘no-quotes’ for example. When the scenarios header is empty, all quotes are returned. Handling error responses is a bit different, and we will look at it in detail a bit later.

    const quotes: (QuoteDto & Scenarios<QuoteScenario>)[] = [
      {
        scenarios: ["draft", "no-phone"],
        statusCode: QuoteStatusCode.Draft,
        customer: { ... },
        mobilePhone: null,
        ...
      },
      {
        scenarios: ["draft", "phone"],
        statusCode: QuoteStatusCode.Draft,
        customer: { ... },
        mobilePhone: { ... },
        ...
      },
      {
        scenarios: ["open", "phone"],
        statusCode: QuoteStatusCode.Open,
        customer: { ... },
        mobilePhone: { ... },
        ...
      },
    ];
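To make the matching rule concrete, here is a minimal sketch with the data trimmed down to just ids and scenarios (the helper names are illustrative):

```javascript
// A quote matches only if it carries every scenario from the header.
const matchesAll = (quote, headerScenarios) =>
  headerScenarios.every((s) => quote.scenarios && quote.scenarios.includes(s));

const quotes = [
  { id: "1", scenarios: ["draft", "no-phone"] },
  { id: "2", scenarios: ["draft", "phone"] },
  { id: "3", scenarios: ["open", "phone"] },
];

// Return the ids of quotes matching all scenarios in the header.
const idsFor = (headerScenarios) =>
  quotes.filter((q) => matchesAll(q, headerScenarios)).map((q) => q.id);

// 'draft phone' returns only quote 2; an empty header matches all quotes;
// an unknown value like 'no-quotes' matches nothing, giving an empty list.
```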

    Mock quote data is still type safe, using the Intersection Types feature of TypeScript.

    The quote type is now an Intersection Type - (QuoteDto & Scenarios<QuoteScenario>)[] - that combines multiple types into one. This allows us to maintain type safety for the mock data while also adding the new scenarios property. To avoid typos in the scenario values, we have a type-safe list of scenarios, as shown below.

    export type QuoteScenario = "phone" | "no-phone" | "draft" | "open";
    export type UserScenario = "admin" | "salesrep";
    
    export interface Scenarios<T> {
      scenarios: T[];
    }

    Based on the generic type T, the scenarios property can have only the associated values. Based on your application and the scenarios applicable, add different types and values to represent them.

    Handling Scenarios and Modifying Response

    A ‘no-user’ value in the scenarios header must not filter out all quotes from the quotes endpoint. For each endpoint, there is a set of associated scenario values that apply. This is a superset (includes all and more) of the scenario type (QuoteScenario, UserScenario, etc.).

    For an API request, we first filter the scenarios in the header down to those that apply to the current request endpoint, using the getScenariosApplicableToEndpoint method. Filtering the header this way ensures we never apply a scenario that does not belong to the current endpoint and accidentally filter out all the data.

    router.render = (req, res) => {
      // Parse the scenarios header, as shown earlier
      const url = req.path;
      const scenariosHeaderString = req.headers["scenarios"];
      const scenariosFromHeader = scenariosHeaderString
        ? scenariosHeaderString.split(" ")
        : [];
      let data = res.locals.data;

      if (scenariosHeaderString && Array.isArray(data) && data.length > 0) {
        const scenariosApplicableToEndPoint = getScenariosApplicableToEndpoint(
          url,
          scenariosFromHeader
        );

        const filteredByScenario = data.filter((d) =>
          scenariosApplicableToEndPoint.every(
            (scenario) => d.scenarios && d.scenarios.includes(scenario)
          )
        );
        res.jsonp(filteredByScenario);
      } else {
        res.jsonp(data);
      }
    };
    
    // filter scenarios header based on the endpoint url
    export const scenariosForEndpoint = {
      "/api/quotes": ["phone", "no-phone", "draft", "open", "no-quotes"],
      "/api/users": ["admin", "salesrep", "no-user"],
    };
    
    export const getScenariosApplicableToEndpoint = (
      endpoint: string,
      scenarios: string[]
    ) => {
      const endpointScenarios = (scenariosForEndpoint[endpoint] as string[]) || [];
      return scenarios.filter((a) => endpointScenarios.includes(a));
    };

    The scenarios applicable to the currently requested endpoint are used to filter the response data. Filtering the header scenarios can be expanded to include HTTP verbs (GET, PUT, POST, etc.) or any other criteria as required.
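As one hedged way such an extension could look, the lookup map might be keyed on the HTTP verb as well as the URL (the key format and names below are hypothetical, not from this post):

```javascript
// Hypothetical variant of scenariosForEndpoint keyed by 'VERB url'.
const scenariosForEndpointAndVerb = {
  "GET /api/quotes": ["phone", "no-phone", "draft", "open", "no-quotes"],
  "GET /api/users": ["admin", "salesrep", "no-user"],
};

// Keep only the header scenarios that apply to this verb + endpoint pair.
const getApplicableScenarios = (method, endpoint, scenarios) => {
  const allowed = scenariosForEndpointAndVerb[`${method} ${endpoint}`] || [];
  return scenarios.filter((s) => allowed.includes(s));
};
```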

    Handling Error Scenarios

    Error responses do not depend on the mock data and have a separate flow. A list of custom responses is defined in the ‘customResponses.ts’ file. If the scenarios header matches any of the codes and URLs of a custom response, the ‘response’ property is returned for that request.

    For example, if a request to the ‘/api/quotes’ endpoint has ‘error-quotes’ in the scenarios header, the response is overridden to match the associated response property from the JSON object below. Expand the filtering to include other conditions (like HTTP verbs) if required.

    const responses = [
      {
        urls: ["/api/quotes"],
        code: "error-quotes",
        httpStatus: 500,
        response: {
          errorMessage: "Unable to get Quotes data.",
        },
      },
      {
        urls: ["/api/users/me"],
        code: "error-user",
        httpStatus: 500,
        response: {
          errorMessage: "Unable to get user data.",
        },
      },
    ];

    The router.render method now handles this additional case and matches the error responses as the first step.

    export const getCustomResponse = (url, scenarios) => {
      if (!scenarios || scenarios.length === 0) return null;

      return responses.find(
        (response) =>
          scenarios.includes(response.code) && response.urls.includes(url)
      );
    };

    router.render = (req, res) => {
      ...
      const customResponse = getCustomResponse(url, scenariosFromHeader);

      if (customResponse) {
        res.status(customResponse.httpStatus).jsonp(customResponse.response);
      } else {
        ...
      }
    };

    Invoking Scenarios

    When requesting the API, pass the scenarios header to activate the different scenarios. Based on the values in the scenarios header, JSON Server will filter the response data. Below is a sample request made with ‘draft’ in the scenarios header, and it returns only the quotes that have the ‘draft’ scenario applied to them.

    GET http://localhost:5000/api/quotes HTTP/1.1
    Host: localhost:5000
    scenarios: draft
    
    HTTP/1.1 200 OK
    [
      {
        "id": "1",
        "scenarios": [
          "draft",
          "no-phone"
        ],
        "statusCode": "Draft",
        "lastModifiedAt": "2020-03-01T14:00:00.000Z",
        "customerName": "Rahul",
        "mobilePhoneDescription": null
      },
      {
        "id": "2",
        "scenarios": [
          "draft",
          "phone"
        ],
        "statusCode": "Draft",
        "lastModifiedAt": "2020-03-01T14:00:00.000Z",
        "customerName": "Rahul",
        "mobilePhoneDescription": "iPhone X"
      }
    ]

    To test edge case scenarios, it’s now just a matter of adding the appropriate scenarios header to the API request and adding the proper data to the mock JSON Server. Passing the proper header when making the API request allows us to develop and test against these scenarios quickly. In a follow-up post, we will see how to use this in our front-end app development and automated tests.

    Hope this helps you set up the different scenarios for your API.

    Let Azure Manage The Username and Password Of Your SQL Connection String

    Use the Azure Managed Identities feature to connect to Azure SQL. One less piece of sensitive information to manage.

    To connect to a SQL database, we usually use a connection string that has a username and password. We ensure that the connection string is stored and distributed securely.

    However, the problem here is the very existence of having something sensitive to protect.

    "ConnectionStrings": {
        "QuotesDatabase": "Server=tcp:quotetest.database.windows.net,1433;Database=quotes;User Id:<UserName>;Password:<YourPasswordHere>"
      }

    Azure SQL supports Azure AD authentication, which means it also supports the Managed Identity feature of Azure AD. With Managed Identity, we no longer need the User Id and Password to connect. The credential is managed automatically by Azure and allows us to connect to resources that support Azure AD authentication.

    In this post, let us look at how we can use Managed Service Identity to connect to Azure SQL from a web application running in Azure. Once set up, all we need is the database server details and the database name to connect to the database.

    Using Azure AD Token to Connect to SQL

    Using the DefaultAzureCredential from the Azure Identity SDK, we can retrieve a token from Azure AD. SqlConnection uses this token for authentication. Below is sample code where the AccessToken property of the SqlConnection is set to the Azure AD token.

    var connectionString = Configuration.GetConnectionString("QuotesDatabase");
    services.AddTransient(a =>
    {
        var sqlConnection = new SqlConnection(connectionString);
        var credential = new DefaultAzureCredential();
        var token = credential
            .GetToken(new Azure.Core.TokenRequestContext(
                new[] { "https://database.windows.net/.default" }));
        sqlConnection.AccessToken = token.Token;
        return sqlConnection;
    });

    When using Entity Framework, we need a slight workaround until EF Core gets full support for Azure AD token access. The easiest way is to explicitly set the token on the underlying SqlConnection of the EF context. Also, check out this gist for a different solution.

    public QuoteContext(DbContextOptions options) : base(options)
    {
        var conn = (Microsoft.Data.SqlClient.SqlConnection)Database.GetDbConnection();
        var credential = new DefaultAzureCredential();
        var token = credential
                .GetToken(new Azure.Core.TokenRequestContext(
                    new[] { "https://database.windows.net/.default" }));
        conn.AccessToken = token.Token;
    }

    Setting Up SQL Server For Managed Identity

    To manage Azure SQL for AD identities, we need to connect to SQL under an Azure AD user context. To do this, let us set up an Azure AD user as a SQL admin. It can be done from the Azure Portal under the Active Directory admin option for the database server, as shown below.

    Using the SQL AD Admin credentials, you can connect via SQL Server Management Studio or sqlcmd and grant other AD identities access. The below script grants the user the db_datareader, db_datawriter, and db_ddladmin roles.

    CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;
    ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
    ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
    ALTER ROLE db_ddladmin ADD MEMBER [<identity-name>];
    GO

    <identity-name> is the name of the managed identity in Azure AD. For a system-assigned identity, the name is the same as the App Service name. It can also be an Azure AD group (use the group name in this case), which gives you multiple options for managing access to the database. For local development, you can either create a separate AD application and use its ClientId/Secret with EnvironmentCredential, add all developers to an Azure AD group and grant the group access, or explicitly add each user to the database.

    We no longer need any credentials to connect to the SQL database running on Azure. That is one less piece of sensitive information to manage for our application.

    "ConnectionStrings": {
        "QuotesDatabase": "Server=tcp:quotetest.database.windows.net,1433;Database=quotes"
      }

    Hope this helps you!

    One of the common challenges when building cloud applications is managing the credentials used to authenticate to cloud services. The Managed Service Identity feature of Azure AD provides an automatically managed identity in Azure AD. This identity helps authenticate with cloud services that support Azure AD authentication. In a previous post, we saw how the DefaultAzureCredential, part of the Azure SDKs, helps unify how we get tokens from Azure AD. The DefaultAzureCredential, combined with Managed Service Identity, allows us to authenticate with Azure services without the need for any additional credentials.

    In this post, let us look at how to set up DefaultAzureCredential for the local development environment so that it works as seamlessly as Managed Identity does on Azure infrastructure. On the local development machine, we can use two credential types to authenticate.

    Using EnvironmentCredential

    The EnvironmentCredential looks for the following environment variables to connect to the Azure AD application.

    • AZURE_TENANT_ID
    • AZURE_CLIENT_ID
    • AZURE_CLIENT_SECRET

    How do we get these values?

    In the Azure Portal, under Azure Active Directory -> App Registrations, create a new application. Once created, from the Overview tab, get the Application (Client) Id and the Directory (Tenant) Id. Under Certificates and Secrets, add a new client secret, and use that for the secret.

    Now that we have all the required values, let’s set up the environment variables. You can do this either as part of your application itself or under the Windows environment variables.

    PRO TIP: Have a script file as part of the source code to set up such variables. Make sure the sensitive values are shared securely (and not via source control).

    If you want to set it from the source code, you can do something like below

    #if DEBUG
      var msiEnvironment = new MSIEnvironment();
      Configuration.Bind("MSIEnvironment", msiEnvironment);
      Environment.SetEnvironmentVariable("AZURE_TENANT_ID", msiEnvironment.TenantId);
      Environment.SetEnvironmentVariable("AZURE_CLIENT_ID", msiEnvironment.ClientId);
      Environment.SetEnvironmentVariable("AZURE_CLIENT_SECRET", msiEnvironment.ClientSecret);
    #endif

    Add the sensitive configs to the User Secrets from Visual Studio so that you don’t have to check them into source control.

    When using DefaultAzureCredential to authenticate against resources like Key Vault, SQL Server, etc., you can create just one Azure AD application for the whole team and share the credentials around securely (use a password manager).

    Using SharedTokenCacheCredential

    DefaultAzureCredential can use the shared token credential from the IDE. In the case of Visual Studio, you can configure the account to use under Options -> Azure Service Authentication. By default, the accounts that you use to log in to Visual Studio appear here. If you have multiple accounts configured, set the SharedTokenCacheUsername property to specify the account to use.

    In my case, I have my Hotmail address (associated with my Azure subscription) and my work address added to Visual Studio. However, when using my Hotmail account to access KeyVault or Graph API, I ran into this issue. Explicitly adding in a new user to my Azure AD and using that from Visual Studio resolved the issue.


    When using this approach, you need to explicitly grant every member of your team access to the resource, which might cause some overhead.

    I hope this helps you to get your local development environment working with DefaultAzureCredential and seamlessly access Azure resources even when running from your local development machine!

    For the past couple of weeks, I have been playing around with Cypress and enjoying the experience. Cypress is a next-generation front-end testing tool built for the modern web. It is the next-generation Selenium and enables us to write tests that are faster, easier, and more reliable.

    In this post, let’s look at how to set up Cypress for a React application that runs over a fake JSON Server, all using TypeScript. By using a fake server for the tests, we can guarantee the application state and the data to expect.

    Create React App

    Setting up a Create React App with TypeScript is straightforward and supported out of the box. All you need to specify is the typescript template when you create a new application (as shown below). The documentation also has steps on how to add TypeScript to an existing project.

    npx create-react-app my-app --template typescript

    Setting Up JSON Server

    In the previous post we looked at how to set up a Fake REST API using JSON Server. Let’s move JSON Server and the mock data to TypeScript. It forces us to update the mock data any time the models are updated. I have JSON Server under the mockApi folder.

    npm install json-server @types/json-server typescript

    Add a tsconfig.json for the TypeScript compiler and update the server.js file to server.ts.

    {
      "compilerOptions": {
        "module": "commonjs",
        "target": "es6",
        "moduleResolution": "node",
        "esModuleInterop": true,
        "lib": ["es6"],
        "forceConsistentCasingInFileNames": true
      },
      "include": ["**.ts"]
    }
    // server.ts
    import jsonServer from 'json-server';
    import data from './mockData';
    
    const server = jsonServer.create();
    const router = jsonServer.router(data);
    ...
    server.use(router);
    server.listen(5000, () => {
      console.log('JSON Server is running');
    });
    

    For the mock data, use the API model DTO type definitions. If you have a Swagger definition for the APIs, use NSwag to generate TypeScript definitions. Generating the definitions can be scripted or done using NSwag Studio.

    const quotes: QuoteDto[] = [
      {
        id: "1",
        statusCode: QuoteStatusCode.Draft,
        lastModifiedAt: new Date("2-Mar-2020"),
        customer: {...},
        mobilePhone: null,
        accessories: [],
      },
    ];
    

    Start the mock API using ‘npx ts-node server.ts’.

    Depending on how your React app calls the API, you can set it up to use the mock API server that we are running. If the web app and API are usually served from the same host and port, you can proxy the requests to JSON Server by setting the proxy field in package.json.
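With create-react-app, that proxy entry in package.json might look like the below (assuming JSON Server runs on port 5000, as in the earlier snippets):

```json
{
  "proxy": "http://localhost:5000"
}
```

With this in place, API calls the app makes to its own origin during development are forwarded to the JSON Server.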

    Setting Up Cypress

    The Cypress docs are well explained and have a step-by-step walkthrough to set up Cypress tests. I have Cypress installed under the web application folder.

    npm install --save-dev cypress @testing-library/cypress @types/testing-library__cypress

    Cypress comes with default test examples. If the example tests are not showing up for you, try running ‘cypress open’ (or ‘cypress run’), which should generate them. You can exclude the example tests from running in the cypress.json file.

    {
      "ignoreTestFiles": "**/examples/*.js",
      "baseUrl": "http://localhost:3000"
    }

    Below is the folder structure that I have - mockApi (JSON Server), cypress, and ui (create-react-app).

    Start writing Cypress tests now!

    JSON Server is a great way to set up a full fake REST API for front-end development. JSON Server can be set up literally in ‘30 seconds’ and with no coding, as the website claims. Capture some of the real API’s data if it already exists, or create mock data based on the API schema in the db.json file. That’s all there is to it, and we have an API with full CRUD capabilities.
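A minimal db.json could look like the below (the resource and field names are illustrative); each top-level key becomes a REST resource with full CRUD endpoints:

```json
{
  "quotes": [
    { "id": "1", "statusCode": "Draft", "customerName": "Rahul" },
    { "id": "2", "statusCode": "Open", "customerName": "Alex" }
  ],
  "users": [{ "id": "1", "name": "Admin User" }]
}
```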

    However, it’s not always that you can use something straight out of the box to fit all conditions and constraints of your API. In this post, let’s look at customizing and configuring JSON Server for some commonly occurring scenarios.

    Setting up JSON Server

    JSON Server can be used as a module in combination with other Express middlewares when it needs to be customized. JSON Server is built over Express, a web framework for Node.js. To set it up as a module, add a server.js file to your repository with the setup code below, as from the docs.

    // server.js
    const jsonServer = require("json-server");
    const server = jsonServer.create();
    const router = jsonServer.router("db.json");
    const middlewares = jsonServer.defaults();
    
    server.use(middlewares);
    
    // Have all URLS prefixed with a /api
    server.use(
      jsonServer.rewriter({
        "/api/*": "/$1",
      })
    );
    
    server.use(router);
    server.listen(5000, () => {
      console.log("JSON Server is running");
    });
    

    Start up the server using ‘node server.js’.

    Mostly, I have my APIs behind the ‘/api’ route. Add a rewriter rule to redirect all calls with ‘/api/*’ to the root ‘/$1’. The ‘$1’ represents everything captured by the ‘*’. E.g., a call to ‘localhost:5000/api/quotes’ will now be redirected to ‘localhost:5000/quotes’, where the JSON Server has all the data available through the db.json file.
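The effect of the rule can be sketched as a simple string rewrite (this is not JSON Server’s actual implementation, just the behavior of the rule):

```javascript
// '/api/*' -> '/$1': everything captured after '/api/' replaces the path.
const applyRewrite = (url) => url.replace(/^\/api\/(.*)$/, "/$1");
```

So ‘/api/quotes’ becomes ‘/quotes’, and nested paths like ‘/api/quotes/1’ become ‘/quotes/1’.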

    Setting up and Organizing Mock Data

    When using a JSON file (db.json) as the mock data source, any changes made using POST, PATCH, PUT, DELETE, etc. update the JSON file. Most likely, you will be using source control (if not, you should), and this means reverting the changes to the db.json file every time. I don’t like doing this, so I decided to move my mock data to an in-memory JSON object.

    The router function takes in a source that is either a path to a JSON file (e.g. 'db.json') or an object in memory. Using an in-memory object also allows organizing our mock data into separate files. I have all my mock data under one folder with an index.js file that serves the in-memory object, as below.

    // index.js file under mockData folder
    // quotes, users, products, branches etc are in other
    // files under the same folder
    
    const quotes = require("./quotes");
    const users = require("./users");
    const products = require("./products");
    const branches = require("./branches");
    
    module.exports = {
      quotes,
      users,
      products,
      branches,
    };
    

    Pass the in-memory object to the router as below

    const data = require("./mockData");
    const router = jsonServer.router(data);
    

    Since this is an in-memory object, any changes made to it are not persistent. Every time the server starts, it uses the same data served from the ‘index.js’ file above.

    Summary and Detail View Endpoints

    Another common scenario is to have a list view and a detailed view of the resources. E.g., we have a list of quotes, and clicking any one opens the detailed view. The data representations for the detail and list views are often different.

    '/api/quotes'  -> Returns list of Quote Summary
    '/api/quotes/:id' -> Returns Quote Details

    By overriding the render method of the router, we can format the data separately for the list view and the detail view. Below I intercept the response if the route matches the list API endpoint and transform the data into the summary format.

    router.render = (req, res) => {
      let data = res.locals.data;
    
      if (url === "/api/quotes" && req.method === "GET") {
        data = data.map(toQuoteSummary);
      }
      res.jsonp(data);
    };
    
    const toQuoteSummary = (quote) => ({
      id: quote.id,
      scenarios: quote.scenarios,
      quoteNumber: quote.quoteNumber,
      statusCode: quote.statusCode,
      lastModifiedAt: quote.lastModifiedAt,
      customerName: quote.customer && quote.customer.name,
      mobilePhoneDescription: quote.mobilePhone && quote.mobilePhone.serialNo,
    });
    

    JSON Server delivers what it promises and is easy to set up and customize. If you have the original API running, capture the API requests to generate mock data. Strip out any sensitive or PII information before checking it into source control.

    Here is an example repository, where I have been setting up a Fake API to drive a front-end application, cypress tests, and more.

    Hope this helps you get started with JSON Server and mock your APIs.

    Photo by Taylor Vick on Unsplash https://unsplash.com/photos/M5tzZtFCOfs