HOW TO: ZIP Multiple CSV Files In ASP.NET

    Learn how to generate a ZIP file containing many CSV files in an ASP.NET Web API application. The same approach is useful for zipping files of any type in any .NET application using C#.

    Recently at a client, I had to generate many CSV files from an API endpoint. The user then downloads the CSV files as a single ZIP archive.

    So here is one way to do it if you (including my future self) ever run into a similar requirement.


    Creating CSV File In-Memory

    CSV files are a good option if you want to share data while allowing it to be opened in Excel or Google Sheets as well. Both are popular business applications used by almost everyone.

    In C#, the best way I have come across to generate a CSV file is to use CsvHelper.

    CsvHelper is a .NET library for reading and writing CSV files. It is extremely fast, flexible, and easy to use.

    CsvHelper is available as a NuGet package and is easy to get started with. I usually prefer representing the CSV file record as a C# class.

    In this case, I had to generate multiple CSV files grouped by StoreName with delivery details for the day. The DeliveryJobRecord class represents one record in the CSV file.

    public class DeliveryJobRecord
    {
        public string StoreName { get; set; }
        public string OrderNo { get; set; }
       ...
    }

    Using the CsvWriter class, generate a CSV file for a list of records.

    In this case, I do not want any physical files on the server, so I am using a MemoryStream to generate the files. The WriteRecords method on the CsvWriter writes out the data in CSV format to the memory stream.

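    For context, the loop below assumes the day's records have already been grouped by store and that the generated files are collected into a list. A minimal sketch of that setup, where deliveryJobRecords is a hypothetical variable holding the day's records:

    // Assumed setup (not shown in the original post).
    var storeGroup = deliveryJobRecords.GroupBy(d => d.StoreName);
    var deliveryFiles = new List<File>();
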
    foreach (var store in storeGroup)
    {
        byte[] bytes;
        using (var ms = new MemoryStream())
        {
            using (var writer = new StreamWriter(ms))
            {
                using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture)) // newer CsvHelper versions require a CultureInfo
                {
                    csv.WriteRecords(store.ToList());
                }
            }
    
            bytes = ms.ToArray();
        }
    
        deliveryFiles.Add(new File
        {
            Bytes = bytes,
            FileName = $"{store.Key} {deliveryDateTime:dd-MM-yyyy} - Delivery.csv"
        });
    }
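
    For reference, the File type used above is a simple DTO pairing the file contents with a file name. Its exact shape is an assumption, inferred from how it is used in this post:

    // Simple in-memory file representation (assumed shape).
    public class File
    {
        public byte[] Bytes { get; set; }
        public string FileName { get; set; }
    }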

    Creating ZIP file In-Memory

    Now that we have the list of delivery files (the CSV files) in memory, let us add them to a ZIP archive.

    To generate a ZIP file, use the ZipArchive class from the System.IO.Compression namespace. It allows creating a new ZipArchive over a stream. Since I intend to return the archive file in the same HTTP call, I am again using a memory stream.

    The CreateEntry method adds a new file to the ZIP archive. It returns a ZipArchiveEntry, which exposes a stream to write the CSV file contents to.

    var compressedFileStream = new MemoryStream();
    using (var zipArchive = new ZipArchive(compressedFileStream, ZipArchiveMode.Create, true))
    {
        foreach (var deliveryFile in deliveryFiles)
        {
            var zipEntry = zipArchive.CreateEntry(deliveryFile.FileName);
    
            using (var originalFileStream = new MemoryStream(deliveryFile.Bytes))
            using (var zipEntryStream = zipEntry.Open())
            {
                originalFileStream.CopyTo(zipEntryStream);
            }
        }
    }
    
    return new File()
    {
        Bytes = compressedFileStream.ToArray(),
        FileName = "Delivery Details.zip"
    };

    Looping over all the delivery files, add each one to the ZipArchive. ZIP files are binary data, so return them as ‘application/octet-stream’ from the API endpoint.

    [HttpGet]
    public IActionResult DownloadDeliveriesForToday()
    {
        var zipFile = GetZippedFile(DateTime.UtcNow);
        return File(zipFile.Bytes, "application/octet-stream", zipFile.FileName);
    }

    The full source code is available here.

    The same approach applies to creating archive files containing any file types. I hope this helps you with creating ZIP archive files in .NET applications using C#.

    Simulate UI Scenarios For Front-End Development

    Set up front-end application to switch between different UI states. Simulate all possible scenarios using a fake API server.

    In a previous post, Simulating Different Scenarios Using Fake JSON Server API, I showed how to set up a fake API to return data based on different UI states. E.g., given a UI list view, the application can be in different states: it can show an empty list, a list of data, a list of data that does not fit in one page, a server error, etc.

    In this post, let us look at how to set up the front-end application to switch between these different states. We will see how to define scenarios, switch between them, and pass them to the API on every request.

    Pass Scenarios to API

    Scenarios are determined by the ‘scenarios’ header (choose a different name if you like) on the HTTP request. To inject this header, we need to be able to intercept the API requests. I usually prefer to keep all API requests under a single folder and use an abstraction over the HTTP library of choice. Most HTTP libraries provide extension points to intercept requests before sending them. Use these interception points to inject the scenarios into the HTTP request header.

    Below I use Axios, a promise-based HTTP client, to make requests to the API. Using the Interceptors feature of Axios, I inject the scenarios header into each request. It uses a getSelectedScenario helper function to get the currently selected scenario to simulate.

    // http.ts
    const http = axios.create();
    
    if (process.env.NODE_ENV === "development") {
      http.interceptors.request.use(
        async (request) => {
          const selectedScenario = getSelectedScenario();
          request.headers["scenarios"] = selectedScenario
            ? selectedScenario.scenarios.join(" ")
            : "";
          return request;
        },
        (error) => Promise.reject(error)
      );
    }
    
    export default http;

    When making requests to the API, use the exported http instance as shown below. All HTTP requests flow through the interceptor and will have the scenarios header injected.

    // quotes.api.ts
    import http from "./http";
    
    export async function loadAllQuotes(): Promise<QuoteSummaryDto[]> {
      const response = await http.get<QuoteSummaryDto[]>("/api/quotes");
      return response.data;
    }

    Define and Manage Scenarios

    The getSelectedScenario helper function retrieves the selected scenario from the storage of your choice. It can be in memory, local storage, a shared JSON file, etc. Local storage is my personal choice, as it persists the values I am interested in across browser sessions and integrates with the browser developer tools.

    The selectedScenarioGroup key in local storage determines the current list of scenario headers to send to the fake API server. We can change the list of scenarios by modifying the value for this key in local storage.
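
    A minimal sketch of the helper is below. The key name and the shape of the stored value are assumptions, inferred from how the interceptor above uses the selected scenario:

    // scenarios.ts
    export interface ScenarioGroup {
      name: string;
      scenarios: string[];
    }

    export function getSelectedScenario(): ScenarioGroup | null {
      const raw = window.localStorage.getItem("selectedScenarioGroup");
      return raw ? (JSON.parse(raw) as ScenarioGroup) : null;
    }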

    To make defining and changing scenarios interactive, I created a small form that plugs into the UI. The ‘+’ icon shown below appears only during development and expands out to a Scenario Form Builder. The form allows adding new scenario groups by entering a name and selecting the associated scenarios. It saves the scenario groups to local storage under ‘scenarioGroups’. With the scenario form builder, I can easily define new scenarios from the UI and start developing against those cases.

    As you develop new features, the new scenarios can be added along with the source code to make them available across the team. Every time the app starts, it merges the scenarios in source code with the ones existing in local storage (a sketch of this follows below). Being able to simulate different scenarios makes front-end development more seamless when used along with a JSON Server fake API. The scenarios make it possible to simulate all edge cases and develop for them. They also help in setting up data when writing tests.
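
    A rough sketch of that merge-on-startup step, assuming the ScenarioGroup type from the earlier sketch and using group names as the merge key (both assumptions):

    import { ScenarioGroup } from "./scenarios";

    // Scenario groups shipped with the source code (example values).
    const builtInGroups: ScenarioGroup[] = [
      { name: "Empty list", scenarios: ["empty-quotes"] },
      { name: "Server error", scenarios: ["error-quotes"] },
    ];

    // Called once on app start: keep user-defined groups and add any
    // built-in group not already present in local storage.
    export function mergeScenarioGroups(): void {
      const raw = window.localStorage.getItem("scenarioGroups");
      const saved: ScenarioGroup[] = raw ? JSON.parse(raw) : [];
      const merged = [
        ...saved,
        ...builtInGroups.filter((b) => !saved.some((s) => s.name === b.name)),
      ];
      window.localStorage.setItem("scenarioGroups", JSON.stringify(merged));
    }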

    I hope this makes your front-end development more enjoyable!

    First things first: kindly subscribe to my channel.

    It’s been three-plus years since I published the first YouTube video, and here are my experiences recording the first screencast. Things have changed a lot since then, so I thought of putting together this post on my current setup.

    In this post, I will walk through my current YouTube setup, the equipment, software, and the recording process.

    Equipment Setup

    Equipment does play an essential part in the quality of the video. However, that is not the only thing. Don’t wait for the best gear to start creating. That is just another way to procrastinate. Add more stuff as you go along.

    That said, here is what my recording gear looks like now.

    Audio

    Get a decent mic.

    It is the one piece of advice that you see and hear everywhere when looking to start recording videos or screencasts. Even though I was not sure I would be in it for the long term, I got one of the best microphones available at the time. It was costly, but worth it.

    The Rode Podcaster is a dynamic USB mic and a popular choice amongst many people out there. It’s pricey (depending on your budget) but delivers excellent value. It cuts off most of the room noise and takes in only your voice. You need to get close to the mic, so a boom arm helps.

    One other cheaper and popular option that came up when researching microphones is the Blue Yeti. Check it out if you don’t want to spend a lot.

    Video

    Most of my videos are screencasts, which means the main content is the computer screen. When I started out, I recorded only my screen and audio. After a couple of videos, I wanted a video of myself as a picture-in-picture (PiP) video, where I appear in a small box at the bottom.

    I started with a Logitech C922 HD webcam but soon moved on to a DSLR setup inspired by Hanselman. I use a Sony Alpha a6000, with an Elgato CamLink to live stream video from the camera to the computer. With a dummy battery, the camera never runs out of charge, and it is one less thing to worry about when recording. I mount the camera on the stand that came along with my lights. I am looking at options to move this to a separate mount some time - the Elgato Multi Mount along with the Flex Arm Kit looks excellent.

    Lights

    Lighting is crucial when shooting yourself. I record in our spare bedroom, which, unfortunately, does not get any sunlight. Even if it did, it would not be of much use, as I do all my recording early in the morning. I got myself a 14” ring light by Neewer. Currently, I have it behind my monitor, and it does an excellent job lighting up my face. But I am looking at ways to improve the framing and the light setup.

    Software Tools

    Windows is my primary work machine and is where I record my videos, so most of the software is specific to it. I use Camtasia for recording the screen, Audacity for the sound, and the default Camera app in Windows for the video from the DSLR.

    Before recording, I switch my primary 4K monitor to 1920px x 1080px (Full HD) resolution. It is important to record, edit, and publish at the same size settings. All three applications record the audio, which makes it easy to align the recordings together when editing.

    Record, Edit and Publish in the same size

    The CamLink used to crash/freeze randomly during recording. The problem was resolved after connecting the CamLink directly to my laptop instead of the docking station. Reddit says it’s most likely the CamLink overloading the USB link.

    PRO TIP: Snap your fingers, like a clapperboard. The sharp spike in the audio waveform helps to align the recordings.

    I process the audio in Audacity - mostly sticking to Normalize, Amplify, and Noise Reduction. For the past couple of videos, I have been using Auphonic, an automatic audio post-production web service. I am currently on the 2 hours per month trial with Auphonic.

    Most of the editing happens in Camtasia, where I import the audio, video, and the screen recording into the same project. Layer them up depending on how you want the final video. I have the camera video on top and the screen below it. The audio position does not matter. Once all the different layers are aligned, I separate the audio from the camera and screen recordings and delete those redundant audio tracks.

    I recently upgraded from Camtasia 8 to Camtasia 2019 and have been liking the new interface and experience. I like the overall UI and layout better; the default library assets are way better, keyboard shortcut support is improved, and the Alt + drag to extend a frame is a real bonus. Editing takes a significant portion of my overall workflow, and I am trying to bring it down and optimize it.

    YouTube Thumbnails

    I use Canva for making thumbnails for YouTube. Canva has an in-built template for YouTube thumbnails, which makes it easy. For stock photos for the thumbnails, I mostly use Pixabay and Unsplash. I am on the free version of all the above tools. I usually put the name and details of the photos in the video description.

    Planning and Recording Workflow

    I use Notion for most of my notes and planning. I recently migrated to Notion from OneNote and like it way better. I keep a list of video ideas and a rough plan of when to publish them. The week before recording a video, I script out the main flow of the video along with any associated code. Most of the time, I try to align videos with my blog posts on the same topic, which allows reusing the code examples for both the video and the blog.

    I record my videos early in the morning and try to finish the full recording in one go. The videos average around 10 minutes and take around an hour to record. I try to do the editing in batches as and when I get time.

    When asked for tips for a beginner YouTuber, someone said, “Record the first 100 videos as fast as possible. Don’t look at the quality but focus on the quantity.” So if you are still on the fence, don’t wait any longer.

    This is a journey. I am just beginning!

    Do you have a YouTube channel, and what is your workflow? Sound off in the comments.

    * Some links above are affiliate links.

    Connect .NET Core To Azure Key Vault In Ten Minutes

    Access secrets in Azure Key Vault from .NET Core and learn how to handle rotating secrets elegantly.

    Azure Key Vault is a cloud-hosted service for managing cryptographic keys and secrets like connection strings, API keys, and similar sensitive information. Key Vault provides centralized storage for application secrets. Check out my posts on Key Vault if you are new to Azure Key Vault and want to learn more.

    In this post, I will walk through how to access Secrets in an Azure Key Vault from a .NET Core web application. The web application has an API endpoint that drops a message onto an Azure Storage Queue. It uses a connection string in Azure Key Vault to connect to the Storage Queue. The application also gracefully handles rotating Secrets, retiring the old connection string and replacing it with a new one, without needing to restart the application.

    The application uses an AzureQueueSender to drop messages onto the Storage Queue. As usual, it gets the connection string value from the application configuration, using the .NET Core IConfiguration abstraction. Since the connection string is sensitive information, you should keep it out of source control. Usually, this means storing it as User Secrets in the local environment and using variable replacement in Azure DevOps for any deployed environment like dev, test, or prod.

    public class AzureQueueSender : IMessageSender
    {
        public AzureQueueSender(IConfiguration configuration) {...}
        ...
    
        public async Task Send(string content)
        {
            var connectionString =
                Configuration.GetValue<string>("QueueConnectionString");
    
            await SendMessage(connectionString);
        }
    
        private static async Task SendMessage(string connectionString)
        {
            var storageAccount = CloudStorageAccount.Parse(connectionString);
        var queueClient = storageAccount.CreateCloudQueueClient();
            var queue = queueClient.GetQueueReference("youtube");
            var message = new CloudQueueMessage("Hello, World");
            await queue.AddMessageAsync(message);
        }
    }

    Even though this is an acceptable solution these days, we can do better. Managing the connection string, rotating it, and updating its value should happen independently of the application. We should not need to restart the app to do that. If there are multiple such applications, each of them needs a restart whenever we change the Secret.

    Guess what easy solution we usually come up with - let us not change the connection string or any of the Secrets, and keep them the same.

    The whole process is not optimized for change, and our immediate reaction is to resist it.

    Moving Secrets To Key Vault

    Azure Key Vault provides centralized storage for application secrets. To move the connection string to Key Vault, head to the Azure Portal and create a new Key Vault. Under Secrets, create a new Secret named ‘QueueConnectionString’, the same name we used in our application configuration. Set the value for the Secret and save. If you prefer the command line, the Azure CLI can do the same, as sketched below.
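
    A rough CLI equivalent; the vault and resource group names here are placeholders:

    az keyvault create --name my-vault --resource-group my-rg
    az keyvault secret set --vault-name my-vault --name QueueConnectionString --value "<connection string>"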

    .NET Core comes with an Azure Key Vault configuration provider to retrieve secrets from the Key Vault. It allows the application to access Secrets just like any other configuration value, but have them read from Key Vault instead. To wire up the Key Vault configuration provider, add the NuGet references (Microsoft.Extensions.Configuration.AzureKeyVault for the provider, and Azure.Identity for the DefaultAzureCredential used below) and update the Program.cs file to configure the application to use Key Vault, as shown below.

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            })
        .ConfigureAppConfiguration((context, config) =>
        {
            var builtConfig = config.Build();
            var vaultName = builtConfig["VaultName"];
            var keyVaultClient = new KeyVaultClient(
                async (authority, resource, scope) =>
                {
                    var credential = new DefaultAzureCredential(false);
                    var token = credential.GetToken(
                        new Azure.Core.TokenRequestContext(
                            new[] { "https://vault.azure.net/.default" }));
                    return token.Token;
                });
            config.AddAzureKeyVault(
                vaultName,
                keyVaultClient,
                new DefaultKeyVaultSecretManager());
        });

    The above code uses the VaultName from the configuration file (which is not sensitive information and can be managed as release variables for different environments) and creates a KeyVaultClient instance. It uses DefaultAzureCredential to retrieve an Azure AD token, which is used to authenticate with Azure Key Vault. DefaultAzureCredential unifies the way we retrieve an Azure AD token and works seamlessly in the local development environment as well.
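
    For reference, the only vault-related entry needed in appsettings.json is the vault URI, which is what the provider expects; the key name matches what the code above reads, and the URI below is a placeholder:

    {
      "VaultName": "https://my-vault.vault.azure.net/"
    }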

    When the application starts, it looks up all matching configuration names in the associated Vault, retrieves the Secrets, and caches them. IConfiguration provides the same interface over all the different configuration sources, so the application code stays the same as if the value were in the configuration file.

    Handling Secret Rotation

    When the connection string needs to be updated, we can do that in the Azure Key Vault without updating and redeploying the application. However, since the application caches the Secrets from Key Vault on startup, we would need to restart the app to refresh the cache. This is not ideal. When configuring Azure Key Vault as the configuration source, we can specify a ReloadInterval. It reloads the Secrets from the Key Vault whenever the interval elapses - in the below case, every 10 minutes.

    config.AddAzureKeyVault(new AzureKeyVaultConfigurationOptions()
    {
        Client = keyVaultClient,
        Vault = vaultName,
        ReloadInterval = TimeSpan.FromMinutes(10)
    });

    Even with the ReloadInterval set, there is still a time window where the call to Azure Storage will fail: the time between the Secret being updated in the Vault and the next reload. Sure, it is not much time, but a failed request is a failed request. To handle this scenario, let’s add some code to gracefully refresh the configuration values from the Key Vault when a call throws an unauthorized exception.

    Using Polly, a .NET resilience and transient-fault-handling library, we can wrap the call to Azure Storage Queue in a retry policy. The CloudStorageAccount throws a StorageException any time there is unauthorized access. Using Polly, we handle the exception and force-refresh the Secrets in IConfiguration by calling the Reload method. Once reloaded, we get the connection string again from the config, which now holds the updated value from the Vault, and use it to connect and drop a message onto the queue. The application now gracefully handles the case where a Secret is updated in Key Vault by refreshing its cached copy.

    public async Task Send(string content)
    {
        var connectionString =
            Configuration.GetValue<string>("QueueConnectionString");
        var retryPolicy = Policy.Handle<StorageException>()
            .RetryAsync(2, async (ex, count, context) =>
            {
                (Configuration as IConfigurationRoot).Reload();
                connectionString =
                Configuration.GetValue<string>("QueueConnectionString");
            });
    
        await retryPolicy.ExecuteAsync(() => SendMessage(connectionString));
    }

    You can easily connect your existing or new applications to start using Key Vault as a configuration source. With the Key Vault configuration provider, the changes to the application code are minimal. Having the sensitive information in Key Vault keeps it centralized and managed separately from your application. It is also a more secure way to store sensitive information.

    Do you store your application connection strings in the Key Vault? Move them right now - it’s just going to take you ten minutes!

    Cypress is a next-generation front end testing tool built for the modern web. It is the next-generation Selenium and enables us to write tests faster, more easily, and more reliably. Some of the compelling features that I find interesting in Cypress are:

    • Time Travel: Cypress takes snapshots while running the tests and enables hovering over each step to see the application state at that point.

    • Real-time reloads: Cypress automatically reloads any time a test is changed.

    • Automatic waiting: No more adding waits and sleeps to the tests. It was the one thing I hated about writing Selenium tests, and Cypress does that work automatically. It works awesome!

    • Familiar tools: Writing tests is a breeze with Cypress, as it is built on top of existing tools and frameworks.

    Cypress comes with a lot more features and is worth checking out. In this post, we will look at how to get started with Cypress and a few approaches to set up data required for testing the application.

    Installation and Setup

    The Cypress docs are well written and have a step-by-step walkthrough to set up Cypress tests. I have Cypress installed under the web application folder.

    npm install --save-dev cypress @testing-library/cypress @types/testing-library__cypress

    Cypress comes with default example tests. If the example tests are not showing up for you, try running ‘cypress open’ (or ‘cypress run’), which should generate them. You can exclude the example tests from running via the cypress.json file.

    {
      "ignoreTestFiles": "**/examples/*.js",
      "baseUrl": "http://localhost:3000"
    }

    With Cypress installed in your project, use one of the approaches mentioned in the docs to open Cypress. I prefer to add an npm script to package.json and use that to launch Cypress, as sketched below. The Cypress test runner automatically finds all the tests and displays them. The runner also detects the available browsers on the machine and shows an option to choose between them.
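
    The script is a one-liner in package.json; the script name ‘cy:open’ here is just my choice:

    {
      "scripts": {
        "cy:open": "cypress open"
      }
    }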

    Writing First Test

    Typically, my first test checks that the whole setup is working fine. So testing that the app launches and renders the expected elements is a good start. In this case, we are displaying a list of quotes and a ‘Create Quote’ button. A good test is to check for the existence of the button and the title text.

    Add a new file (quotes.spec.ts) under the cypress/integration folder. I prefer grouping tests into folders under the integration folder for better management. The findByText command from the Cypress Testing Library checks for the existence of elements with the specified text - in our case, the header label and the button text.

    describe("Quotes", () => {
      it("Loads home page", () => {
        cy.visit("/");
        cy.findByText("All Quotes");
        cy.findByText("Create Quote");
      });
    });

    The Cypress runner automatically detects the new file and displays it in the UI. Click it to execute the tests against the selected browser.

    Congrats, you have the first test running, and Cypress set up. Check in the code!

    Mock Data Approaches

    Most apps have a backend API that serves data. Setting up the data is essential to writing more tests and is often a painful task. Let us explore a few different options for setting up data for testing.

    The cy.server command starts a server to route requests and allows us to change their default behavior. The cy.route command helps intercept network calls and return mock data.

    Inline Data

    Setting up inline data is the easiest option when starting to write tests. The test uses a JSON object as mock data. Below, the network call to ‘/api/quotes’ is intercepted to return an empty array of quotes. When there are no quotes, we expect the application to show a message indicating that.

    it("No quotes shows empty message", () => {
      cy.server();
      cy.route("GET", "/api/quotes", []);
    
      cy.visit("/");
    
      cy.get("[data-cy=noquotes]");
      cy.findByText("There are no matching Quotes.");
    });

    NOTE: I am using the ‘data-*‘ attribute to select elements, as it helps isolate the elements from CSS and JS changes. Check out the recommended practices for selecting elements.

    Fixture

    To reuse the same test data across different tests (possibly in different files), save it as an external file under the cypress/fixtures folder. Add a new file ‘quotes/quotes.json’ with some quotes data in it. The cy.fixture command loads the data from the file. Using aliasing, the data is used to mock the route call (by specifying ‘@quotes’ to refer to the aliased fixture data). The data is also available as a JSON object, as shown, to verify that the different UI elements are rendered as expected.

    it("Renders quotes as expected", () => {
      cy.server();
      cy.fixture("quotes/quotes.json")
        .as("quotes")
        .then((quotes) => {
          cy.route("GET", "/api/quotes", "@quotes");
    
          cy.visit("/");
    
          const renderedQuotes = cy.get("tbody > tr");
          renderedQuotes.should("have.length", quotes.length);
          renderedQuotes.each((renderedQuote, index) => {
            cy.wrap(renderedQuote).within(() => {
              const quote = quotes[index];
              cy.get("[data-cy=quoteNumber]")
                .invoke("text")
                .should("eq", quote.quoteNumber || "");
              cy.get("[data-cy=customerName]")
                .invoke("text")
                .should("eq", quote.customerName || "");
              cy.get("[data-cy=mobilePhoneDescription]")
                .invoke("text")
                .should("eq", quote.mobilePhoneDescription || "");
              cy.get("[data-cy=statusCode]")
                .invoke("text")
                .should("eq", quote.statusCode || "");
              cy.get("[data-cy=lastModifiedAt]")
                .invoke("text")
                .should("eq", quote.lastModifiedAt);
            });
          });
        });
    });

    JSON Server Fake API

    Using a fake API is helpful when developing front-end applications, as it removes the dependency on building out the whole backend first. Once agreed on an API spec, the two can be built out in parallel without any dependency. JSON Server is a great way to set up a full fake REST API for front-end development. JSON Server can be set up literally in ‘30 seconds’ and with no coding, as the website claims.

    Using a fake API not only speeds up front-end development, it also helps with writing tests. With JSON Server, we can guarantee that the data will always be reset to the initial state every time we start the fake API server. We can also simulate different scenarios and have the API return appropriate data in each case, which makes development and writing tests against those scenarios easy. E.g., we can have scenarios where the API call returns empty quotes, returns a few records, returns lots of records that need multiple pages, or even one where it errors out. If the fake API is set up to return appropriate data for each of the scenarios, it can be used during both development and testing.

    it("Error getting quotes shows error message", () => {
      cy.setScenarios("error-quotes");
      cy.visit("/");
      cy.get(".Toastify");
      cy.findByText("Unable to get data.");
    });

    cy.setScenarios is a Cypress custom command I have added. For all network requests made, it sets a specific HTTP header value, which is used by JSON Server to determine what data to return. Check out the Simulating Different Scenarios Using Fake JSON Server API post for more details on how to set up JSON Server to return data based on the scenarios specified. In the above test, it simulates a server error scenario and expects a toast message with the appropriate details. A sketch of such a custom command is below.
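
    One possible implementation, added to cypress/support/commands.ts. It uses the onAnyRequest option of cy.server to stamp the header onto every routed request; the header name ‘scenarios’ matches the fake API setup, and the exact wiring here is an assumption:

    // commands.ts
    Cypress.Commands.add("setScenarios", (...scenarios: string[]) => {
      cy.server({
        // Runs for every routed request; set the scenarios header on the
        // outgoing XHR so JSON Server knows which data set to return.
        onAnyRequest: (route: any, proxy: any) => {
          proxy.xhr.setRequestHeader("scenarios", scenarios.join(" "));
        },
      });
    });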

    I find setting up a fake API server a compelling way to go forward - a twofer - it helps with both development and testing. If you are setting up Cypress on an existing project, it might be easier to start with inline mock data or fixtures. Capture responses from the real API and use them as fixture or inline mock data.

    Hope this helps you get started with Cypress automation testing.