Tag: software-development

  • Getting started with PactumJS: Project Structure and Your First Test Case

    Getting started with PactumJS: Project Structure and Your First Test Case

    Introduction

    As discussed in the previous article, PactumJS is an excellent choice for API automation testing. 

    As your API testing suite grows, maintaining a clean and organized repository structure becomes essential. We’ll explore a folder structure for your PactumJS-based testing framework, provide tips and tricks for configuration and scripting, and walk through executing tests with reporting.

    For demonstration, we’ll use the Restful Booker API as our test target.

    Set Up Your Project and Install Dependencies

    Prerequisites

    To follow along, make sure you have the following:

1. Node.js v10 or above
2. Basic understanding of JavaScript or TypeScript
3. Familiarity with Node.js modules
4. Familiarity with testing frameworks like Mocha

If you’re new to any of the above, it’s worth reviewing basic tutorials, for example the Automation University courses on Node.js and on test runners like Mocha.

    Install Dependencies

    Start by creating a fresh Node.js project:

    mkdir api_testing_with_pactumjs
    cd api_testing_with_pactumjs
    npm init -y

    Then install necessary packages via NPM:

    # install pactum
    npm install -D pactum
    
    # install a test runner
    npm install -D mocha

    Organise your files

    api_testing_with_pactumjs/
    ├── helpers/
    │   └── datafactory/
    ├── tests/
│   └── authenticate.spec.js
    ├── setup/
    │   └── base.js
    ├── .env.example
    ├── .gitignore
    ├── README.md
    ├── package-lock.json
    └── package.json
1. tests/ folder contains your test specifications organized by feature or endpoint, such as authenticate.spec.js. This keeps tests modular and easy to locate.
2. helpers/ folder houses centralized, reusable logic and utilities; see the sketch after this list. This separation keeps test files focused on what they test rather than how, improving readability and maintainability.
    3. setup/ folder contains global setup files like base.js to configure common test environment settings, such as base URLs and global hooks.
    4. .env.example — A sample environment configuration file listing required environment variables, serving as a reference and template for developers.
    5. .env (not shown in repo) is used locally to store sensitive configuration and secrets, enabling easy environment switching without code changes.
    6. .gitignore file includes folders and files like .env to prevent committing sensitive data to version control.
    7. package.json is a central place for managing project dependencies (like pactum, dotenv, mocha) and defining test scripts (e.g., npm run test, npm run test:report). This facilitates CI/CD integration and consistent test execution.
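As an illustration of item 2, a hypothetical data factory in helpers/datafactory/ could centralise request payloads (the file name and defaults are made up for this example):

// helpers/datafactory/auth.js (hypothetical reusable data factory)
export function buildAuthPayload(overrides = {}) {
  return {
    username: process.env.USERNAME ?? 'admin',
    password: process.env.PASSWORD ?? 'password',
    ...overrides,
  };
}

A test can then call buildAuthPayload() instead of repeating literal credentials in every spec.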

    Write a Basic Test

For our demo, we will take the Restful-Booker Platform built by Mark Winteringham. This application has been created for bed-and-breakfast (B&B) owners to manage their bookings.

    To explore and test the available API endpoints, you can use the official Postman Collection.

Let’s write our first set of API tests for the /auth/login endpoint, which generates a token for an admin user.

    Endpoint: POST /api/auth/login

    Base URL: https://automationintesting.online

    User Context

    User Role: Admin (default user)

    Credentials Used:

    • username: “admin”
    • password: “password”

    Request:

    Method: POST

    Headers: Content-Type: application/json

    Body:

    {
      "username": "admin",
      "password": "password"
    }

    Expected Response:

    HTTP Status: 200 OK

// tests/authenticate.spec.js
import pkg from 'pactum';
const { spec } = pkg;

describe('/authenticate', () => {

    it('should succeed with valid credentials', async () => {
        await spec()
            .post('https://automationintesting.online/api/auth/login')
            .withJson({
                username: 'admin',
                password: 'password'
            })
            .expectStatus(200);
    });
});

    While this test currently focuses on verifying the status code, future articles will enhance it by adding validations for the authentication token returned in the response.

    Manage Environment Variables

    Create .env file

To keep sensitive data like URLs and credentials out of your code, create a .env.example file as a reference for the required environment variables:

    BASE_URL=""
    USERNAME=""
    PASSWORD=""

👉 Tip: Don’t commit your actual .env to version control
    • Use .env.example to document the required variables.
    • Add .env to your .gitignore file to keep credentials secure.
    • Share .env.example with your team so they can configure their environments consistently.

    Load Environment Variables in Tests

    Install dotenv and configure it in your test files or setup scripts:

    npm install --save-dev dotenv

    Example test with environment variables:

    // tests/authenticate.spec.js
    
    import pkg from 'pactum';
    const { spec } = pkg;
    import dotenv from 'dotenv';
    dotenv.config();
    
    describe('/authenticate', () => {
      it('should succeed with valid credentials', async () => {
        await spec()
          .post(`${process.env.BASE_URL}/auth/login`)
          .withJson({
            username: process.env.USERNAME,
            password: process.env.PASSWORD
          })
          .expectStatus(200);
      });
    });

    Execute Test Case

    Once your test files are set up and your .env file is configured with valid credentials and base URL, you’re ready to execute your test cases.

    PactumJS works seamlessly with test runners like Mocha, which means running your tests is as simple as triggering a test command defined in your package.json. Here’s how to proceed:

    Add a Test Script

    In your package.json, add a script under “scripts” to define how to run your tests. For example:

    // package.json
    
    "scripts": {
      "test": "mocha tests"
    }

    This tells Mocha to look for test files in the tests/ directory and run them.

    Run the Tests

    In your terminal, from the root of your project, run:

    npm test

    This will execute test specs and display results in the terminal. 

    You should see output indicating whether the test passed or failed, for example:

      /authenticate
        ✓ should succeed with valid credentials (150ms)
    
      1 passing (151ms)

    Add a Reporting Tool

    By default, PactumJS uses Mocha’s basic CLI output. For richer reporting—especially useful in CI/CD pipelines—you can use Mochawesome, a popular HTML and JSON reporter for Mocha.

    Install Mochawesome

    Install Mochawesome as a development dependency:

    npm install -D mochawesome

    Update Your Test Script

    Modify the scripts section in your package.json to include a command for generating reports:

// package.json

"scripts": {
  "test": "mocha tests",
  "test:report": "mocha tests --reporter mochawesome"
}

    This script tells Mocha to run your tests using the Mochawesome reporter.

    Run the tests with reporting

    Execute your tests using the new script:

    npm run test:report

This generates a Mochawesome report in JSON and HTML formats, which you can review locally or attach to CI pipelines.

      /authenticate
        ✔ should succeed with valid credentials (364ms)
    
    
      1 passing (366ms)
    
[mochawesome] Report JSON saved to ./pactum_test/mochawesome-report/mochawesome.json
[mochawesome] Report HTML saved to ./pactum_test/mochawesome-report/mochawesome.html

    View the report

Open the HTML report in your browser to visually inspect test results.

    Configure Base Test Setup (base.js)

    Create a Shared Configuration

    Create a base.js file in the setup/ directory. This file is a shared configuration used to define reusable logic like setting the base URL, request headers, or global hooks (beforeEach, afterEach). 

    // setup/base.js
    
    import pactum from 'pactum';
    import dotenv from 'dotenv';
    dotenv.config();
    
    const { request } = pactum;
    
    before(() => {
      request.setBaseUrl(process.env.BASE_URL);
    });

Load the Setup Automatically Using --file

To ensure this configuration runs before any tests, register the setup file using Mocha’s --file option. This guarantees Mocha will execute base.js within its context, making all Mocha globals (like before) available.

    Example package.json script:

    "scripts": {
      "test": "mocha tests --file setup/base.js"
    }

    With this in place, run:

    npm test

👉 Tip: Simplify and DRY Up Your Test Scripts

    To avoid repeating the full Mocha command in multiple scripts, define a single base script (e.g., test) that includes your common options. Then, reuse it for other variants by passing additional flags:

    "scripts": {
      "test": "mocha tests --file setup/base.js",
      "test:report": "npm run test -- --reporter mochawesome"
    }

    This approach keeps your scripts concise and easier to maintain by centralizing the core test command. It also allows you to easily extend or customize test runs with additional options without duplicating configuration. Overall, it reduces the chance of errors and inconsistencies when updating your test scripts.

    Conclusion

    By structuring your PactumJS repository with clear separation of tests, helpers, and setup files—and by leveraging environment variables, global setup, and reporting—you build a scalable and maintainable API testing framework. This approach supports growth, team collaboration, and integration with CI/CD pipelines.

  • Contract Testing: Who’s Who in the Process

    Contract Testing: Who’s Who in the Process

    Introduction

    Today, I want to introduce you to the concept of contract testing using an analogy—buying the house of your dreams 🏡. Whether you already own your dream home or are still searching for it, you probably know the excitement and anticipation that comes with the process.

    Imagine you’ve finally found the perfect house. You’re happy to move forward, but before the keys are in your hand, it’s crucial to set clear expectations with the seller. This involves agreeing on the details: the price, the condition of the house, and any other terms. To formalize this, a contract is drawn up, and a neutral party, like a notary or bank, helps ensure everything is clear and fair.

    This scenario mirrors contract testing in software development, where a consumer (the buyer) and a provider (the seller) agree on a contract to ensure their interactions meet expectations. The contract broker (like the notary) acts as a mediator to validate and enforce these agreements.

    Let’s break this analogy down further.

Consumer

In this scenario, you’re the consumer. You have specific expectations: size, number of rooms, location, price, neighbourhood, and so on.

In contract testing, the consumer is a service or application that needs to consume data or services from a provider. The consumer is usually a web or mobile application making requests to a backend service, though it could also be another service calling a backend service.

    A consumer test verifies that the consumer correctly creates requests, handles provider responses as expected, and uncovers any misunderstandings about the provider’s behavior.

    Provider

The seller, in turn, is the person offering the house. They promise certain features: a garden, a modern kitchen, a friendly neighbourhood, and so on.

The provider sits on the other side of the interaction in contract testing and promises to deliver specific data or functionality. It is usually a backend service.

    Contract

    The contract is the written agreement between you and the seller. It ensures both parties understand and agree on what is being provided and what is expected (e.g., the price, delivery date, features of the house).

    The contract is no different in software. The contract is a formal agreement between the consumer and provider about how they will interact (e.g., API specifications, request/response formats).

Is the contract the same as a JSON Schema? Hmmm... not really! This article explains the difference between schema-based and contract-based testing well.

    In short: A schema is a structural blueprint or definition of how data in JSON is organized. It describes the structure, format, and relationships of data. 

    But the schema does not specify how the data should be used, when it should be provided, or how the interaction between the consumer and provider should behave. It’s purely about the data format and structure.

    A contract includes the schema but also goes beyond it to define the behavioral and interaction agreements between the consumer and provider.

A contract includes the following data (see the sketch after this list):

    • The name of the consumer and provider
    • Data requirements for the request
    • Interactions between consumer and provider
    • Matching rules for the dynamic values
    • Environment and deployment information
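For illustration, here is a minimal sketch of what a Pact-style contract file covering these points might look like (all names and values are hypothetical):

{
  "consumer": { "name": "HouseBuyerApp" },
  "provider": { "name": "HouseSellerService" },
  "interactions": [
    {
      "description": "a request for house details",
      "request": { "method": "GET", "path": "/houses/42" },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": { "id": 42, "garden": true, "kitchen": "modern" },
        "matchingRules": {
          "$.body.id": { "match": "type" }
        }
      }
    }
  ],
  "metadata": { "pactSpecification": { "version": "2.0.0" } }
}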

    Contract Broker

    The contract broker, like a bank or notary, helps validate and mediate the agreement. They ensure that both parties adhere to their commitments.

    In contract testing, the contract broker could be a tool or framework (e.g., Pact) that stores and validates contracts. It ensures the provider and consumer stick to their agreed-upon specifications.

    The broker helps verify the compatibility between the two parties independently, ensuring that both can work together smoothly.

    Can-I-Deploy Tool

To check whether consumers and providers can safely deploy their changes to production, Pact provides a command-line interface (CLI) tool called can-i-deploy, which lets both sides determine the verification status of the contract; see the example below.
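For example, a consumer team might call it like this (the pacticipant name and version are hypothetical, and the broker URL is assumed to be configured via PACT_BROKER_BASE_URL):

pact-broker can-i-deploy \
  --pacticipant MyConsumerApp \
  --version 1.0.3 \
  --to-environment production

The command succeeds only if a verified, compatible contract exists for that version.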

    Contract testing approaches

    There are mainly two ways to approach contract testing:

    • The consumer-driven contract testing (CDCT) approach
    • The provider-driven contract testing (PDCT) approach

In this series, I am going to discuss the traditional CDCT approach.

    Consumer-Driven Testing

In the consumer-driven approach, the consumer drives the contract. As the buyer, before finalizing the house purchase, you might inspect the house to confirm it meets your expectations and publish those expectations as a contract to the broker. On the other side, the seller must ensure their house is as described in the contract and ready for sale. This is like provider-side testing, ensuring they deliver what the contract specifies.

    Contract testing ensures that consumers (buyers) and providers (sellers) are on the same page regarding their expectations and deliverables, with a broker (notary or bank) facilitating the process. This approach reduces the risk of miscommunication and ensures smooth collaboration—whether you’re buying a house or building software systems.

    Conclusion

    Contract testing acts as the bridge between consumers and providers, ensuring smooth collaboration. Much like finalizing the purchase of your dream house, both parties agree on a contract that outlines expectations and deliverables, with a broker ensuring everything aligns. Whether you’re buying a house or developing software, clear agreements lead to smoother outcomes!

    Next, we’ll explore the application under test and hit the ground running with implementation!

  • “Shift-Left” Testing Strategy with Contract Testing. Introduction.

    “Shift-Left” Testing Strategy with Contract Testing. Introduction.

    The Inspiration Behind This Series

At the end of 2024, I ordered the book Contract Testing in Action and had been waiting for the right moment to start exploring it. Recently, I finally read through most of its chapters and found its insights handy for development teams working with microservice architectures. Inspired by the knowledge and ideas from the book, I decided to write a series of articles to share what I’ve learned and explore how these concepts can be effectively applied in real-world scenarios. This introductory article serves as the starting point, explaining what contract testing is and how it fits into a broader testing strategy.


    Testing Strategy for Microservices.

Over the past few decades, microservice architecture has become a crucial way of building modern, scalable, and reliable applications. Traditional monolithic systems, where the database and all business logic sit within a single, tightly coupled structure, have gradually taken a backseat. In their place, independently deployable and modular services, known as microservices, have become the foundation of contemporary software development. This shift enables product teams to deliver features faster and more efficiently. However, with this huge leap comes the challenge of ensuring that each microservice operates correctly both in isolation and as part of a larger system. So, planning and executing testing strategies becomes an important component of the development lifecycle.

The most widespread model for testing any system is the test automation pyramid proposed by Mike Cohn in his book ‘Succeeding with Agile’.

    Source: https://semaphoreci.com/blog/testing-pyramid

Microservices often rely on APIs to exchange data, with some services acting as providers (offering data) and others as consumers (requesting and processing data). Without a clear and tested agreement, or contract, between these services, even minor changes in one service can lead to failures across the system. This is where contract testing becomes invaluable and should be included in the pyramid as well.

    Here is the adjusted version of the pyramid:

    Why is contract testing so important?

Let’s take a real-life example. Imagine a banking application with the following teams and components:

    1. Frontend Application (Consumer):
      Built by a frontend team, this React-based web app allows customers to view their account balance and transaction history by making API calls to the backend.
    2. Backend API (Provider):
      Managed by a backend team, the API provides endpoints for account details, including:
      • /account/{id}/balance – Returns the account balance.

    The frontend app is integrated with the backend API, expecting the following responses:

    Account Balance Endpoint (GET /account/{id}/balance):

    {
      "accountId": "12345",
      "balance": 5000
    }

On Friday evening, a backend engineer decides to improve the /account/{id}/balance response by renaming accountId to id. The new response structure looks like this:

    {
      "id": "12345",
      "balance": 5000
    }

The engineer deploys the change, thinking it’s a harmless rename. No contract tests are in place to verify compatibility with the frontend.

    Result:

The frontend app’s code does not recognise the new id field and still tries to read the balance information under the old accountId key. This results in an error when parsing the JSON response, as the frontend is still expecting the accountId field. As a result, the frontend fails to display the account balance and shows a blank page or an error message to customers.

    Impact Over the Weekend:

    • Customers are unable to check their account balance, leading to frustration and confusion.
• The frontend team is unaware of the backend change until Monday morning, as there are no contract tests in place to alert them about the breaking change.
    • The downtime disrupts the customer experience, potentially destroying trust in the banking application and impacting the reputation of the service.

    What could be done better?

    With contract testing, the frontend and backend teams define a clear agreement (the “contract”) about API interactions, specifying expected fields and data types. Before deployment, both consumer (frontend) and provider (backend) teams run tests to ensure compatibility, catching issues early. By integrating contract tests into the CI/CD pipeline, breaking changes are flagged during development or staging, preventing them from reaching production. This approach ensures smooth communication between services, reduces downtime, and enforces better collaboration between teams.
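To make this concrete, here is a minimal sketch of a consumer-side contract test using Pact's JavaScript library (the service names, endpoint wiring, and test runner setup are illustrative assumptions, not code from this article):

// account-balance.pact.spec.js (hypothetical consumer test)
import { PactV3, MatchersV3 } from '@pact-foundation/pact';

const provider = new PactV3({
  consumer: 'BankingWebApp',
  provider: 'AccountAPI',
});

describe('GET /account/{id}/balance', () => {
  it('returns the balance with the agreed field names', () => {
    provider
      .given('account 12345 exists')
      .uponReceiving('a request for the account balance')
      .withRequest({ method: 'GET', path: '/account/12345/balance' })
      .willRespondWith({
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: {
          // Renaming accountId to id on the provider side breaks this contract
          accountId: MatchersV3.like('12345'),
          balance: MatchersV3.like(5000),
        },
      });

    // Runs the consumer's expectations against a local Pact mock server
    // (fetch is global in Node 18+)
    return provider.executeTest(async (mockServer) => {
      const res = await fetch(`${mockServer.url}/account/12345/balance`);
      const body = await res.json();
      if (body.accountId === undefined) {
        throw new Error('Expected accountId in the response');
      }
    });
  });
});

If the backend renamed accountId to id, verifying this contract on the provider side would fail long before the change reached production.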

    What is Contract Testing in a nutshell?

    Contract testing is a technique for testing an integration point by checking each application in isolation to ensure the messages it sends or receives conform to a shared understanding that is documented in a “contract”.

    Source: Pact Documentation.

    The contract is a JSON file containing the names of the two interacting systems: in this case, the web application and backend server. The contract also lists all interactions between the two systems. 

In the context of the test automation pyramid, contract testing bridges the gap between unit tests and end-to-end tests by focusing on the interactions between microservices. It ensures that the consumer and provider services adhere to a shared agreement (the “contract”) regarding APIs, such as expected request and response structures. With contract testing in place, teams can proactively identify and address compatibility problems earlier in the development cycle.

    There is an insightful diagram in the book “Contract Testing in Action” that illustrates how some test cases can be shifted to contract tests. This shift moves them lower in the test automation pyramid, enabling issues to be identified earlier in the development lifecycle. 

    Source: Contract Testing in Action book

    As microservices continue to dominate the landscape of software development, adopting contract testing is no longer optional—it is essential. By incorporating this practice, teams can build scalable, reliable, and user-focused applications, providing a smooth experience for end users and ensuring strong collaboration across development teams.

In the upcoming articles, we will meet the contract testing players and focus on the practical implementation of contract testing, exploring tools, techniques, and best practices for integrating this testing strategy into the development workflow.

As I continue learning, I have also begun compiling a repository of helpful resources on contract testing to serve as a reference for myself and others exploring this topic.

  • Key Aspects I Consider in Automation Project Code Reviews.

    Key Aspects I Consider in Automation Project Code Reviews.

    Recently, I’ve been involved in conducting code reviews for my team’s end-to-end test automation project, which utilizes Playwright technology. I dedicate about a couple of hours each day to this task, either by reviewing others’ code or by responding to feedback on my own pull requests. 

I firmly believe that we as test automation engineers should approach test automation like any other kind of software, because test automation is software development. Software developers should have solid knowledge of tools and best practices: coding and naming standards, configuration management, code review practices, modularization, abstraction, static analysis tools, SOLID and DRY principles, etc. A well-established code review process is one of the success factors when working on test automation projects. You can find many great resources on how to conduct code reviews: best-practice guides by Google, by GitLab, and others. In this article, I would like to point out several aspects I pay attention to while reviewing test automation code, in addition to standard guidelines.

    Automate what can be automated!

Make your life easier 🙂 Automation can significantly simplify managing run-time errors, stylistic issues, formatting challenges, and more. Numerous tools are available to assist with this. For a Playwright project using TypeScript, I recommend installing and configuring the following:

    • ESLint: This tool performs static analysis of your code to identify problems. ESLint integrates with most IDEs and can be implemented as part of your CI/CD pipeline.
    • Prettier: A code formatter that is helpful in enforcing a consistent format across your codebase.
    • Husky: Facilitates the easy implementation of Git hooks.

    In this detailed guide by Butch Mayhew you can find all the information you need to install and configure these tools in your project.
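As a quick illustration, a minimal classic ESLint configuration for such a project might look like the sketch below (it assumes the typescript-eslint and eslint-plugin-playwright packages, plus eslint-config-prettier, are installed; treat it as a starting point rather than a full setup):

// .eslintrc.json
{
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint", "playwright"],
  "extends": [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended",
    "plugin:playwright/recommended",
    "prettier"
  ]
}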

Identify easy-to-spot issues first

The first thing to look for is whether the preliminary checks required for the PR to be merged pass: merge conflicts, outdated branches, failed static analysis or formatter checks. Then you might briefly look for easy-to-spot poor coding practices and errors: naming convention violations, redundant debug lines (for example, console.log()), formatting issues, long or complex functions, unnecessary comments, typos, and so on. Moreover, you might spot violations of rules agreed within the team, like a missing test case ID or description.

Verify that each test focuses on a single aspect.

The general guideline is that a test should verify a single logical behaviour, reflecting the primary objective of the test. For example, if you’re verifying that a button is correctly displayed and functional on the UI, the test should be limited to that specific check.

    Here’s an example using Playwright for a TypeScript project:

import { test, expect } from '@playwright/test';

test('should display and enable the submit button', async ({ page }) => {
  await page.goto('https://example.com');
  const submitButton = page.locator('#submit-button');
  await expect(submitButton).toBeVisible();
  await expect(submitButton).toBeEnabled();
});


    Additionally, name the test to reflect its purpose, capturing the intent rather than the implementation details.

    Separation of concerns

Separation of concerns is a fundamental design principle that we should stick to. When structuring code with functions and methods, it’s crucial to determine the appropriate scope for each. Ideally, a function should do one thing and one thing only. Following this approach, you will achieve a clear and manageable codebase.

    In UI testing, the most popular approach for maintaining separation of concerns is the Page Object Pattern. This pattern separates the code that interacts with the DOM from the code that contains the test steps and assertions.

Proper separation of concerns within tests also means placing setup and teardown steps in separate functions or methods, or in beforeEach and afterEach hooks. This practice makes it easier to understand the core validation of the test without being distracted by the preparatory steps. Importantly, setup and teardown functions should avoid assertions; instead, they should throw exceptions if errors occur. This approach ensures that the primary focus of the test remains on its intended verification.
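As a sketch (the page, selectors, and URL are hypothetical), a page object plus hooks might look like this:

// pages/login-page.ts (hypothetical page object: all DOM interaction lives here)
import { type Page, type Locator } from '@playwright/test';

export class LoginPage {
  readonly username: Locator;
  readonly password: Locator;
  readonly submit: Locator;

  constructor(page: Page) {
    this.username = page.getByLabel('Username');
    this.password = page.getByLabel('Password');
    this.submit = page.getByRole('button', { name: 'Log in' });
  }

  async login(user: string, pass: string) {
    await this.username.fill(user);
    await this.password.fill(pass);
    await this.submit.click();
  }
}

// tests/login.spec.ts (test steps and assertions only)
import { test, expect } from '@playwright/test';
import { LoginPage } from '../pages/login-page';

test.beforeEach(async ({ page }) => {
  // Setup: navigation only; no assertions here
  await page.goto('https://example.com/login');
});

test('should log the user in', async ({ page }) => {
  await new LoginPage(page).login('user', 'secret');
  await expect(page.getByText('Welcome')).toBeVisible();
});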

    Is the locator / selector strategy solid?

    A solid locator/selector strategy is crucial for ensuring that your tests are stable and maintainable. This means using selectors that are resilient to changes in the UI and are as specific as necessary to avoid false positives. It’s important to explore framework-specific best practices for locator or selector strategies. For example, Playwright best practices recommend using locators and user-facing attributes.

    To make your test framework resilient to DOM changes, avoid relying on the DOM structure directly. Instead, use locators that are resistant to DOM modifications:

page.getByRole('button', { name: 'submit' });

    Different frameworks may have their own guidelines for building element locating strategies, so it’s beneficial to consult the tool-specific documentation for best practices.

    Hard-coded values.

Hard-coded values can endanger the flexibility and maintainability of the automation framework in the future. There are a few questions you might ask while reviewing (see the sketch after this list):

    1. Can we use data models to verify data types at runtime? Consider implementing data models to validate data types during execution, ensuring robustness and reducing errors.
    2. Should this variable be a shared constant? Evaluate if the value is used in multiple places and would benefit from being defined as a constant for easier maintenance.
    3. Should we pass this parameter as an environment variable or external input? This approach can significantly improve configurability and adaptability.
    4. Can we extract this value directly from the API interface? Investigate if the value can be dynamically retrieved from the API, reducing the need for hard-coding and improving reliability.
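A tiny sketch of what such a refactoring might look like (file names and values are made up):

// constants.ts: a shared constant, updated in one place
export const DEFAULT_TIMEOUT_MS = 30_000;

// config.ts: external input via an environment variable, with a fallback
export const BASE_URL = process.env.BASE_URL ?? 'https://staging.example.com';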

    Is the code properly abstracted and structured?

    As test automation code tends to grow rapidly, it is important to ensure that common code is properly abstracted and reusable by other tests. Data structures, page objects and API utilities should be separated and organized in the right way. 

But don’t overuse abstraction; tolerate a little duplication in favour of readability.

    Code Comments

    Code comments should not duplicate information the code can provide. Comments should provide context and rationale that the code alone cannot. Additionally, functions and classes should follow a self-explanatory naming convention, making their purpose clear without needing additional comments.
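For instance (a made-up snippet):

// Bad: repeats what the code already says
// increment the retry counter
retries += 1;

// Good: adds context the code alone cannot convey
// The auth service occasionally returns 502 on cold start, so retry once.
retries += 1;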

    “Trust, but verify.”

    Don’t rely on an automated test until you’ve seen it fail. If you can’t modify the test to produce a failure, it might not be testing what you intend. Additionally, be wary of unstable test cases that intermittently pass or fail. Such tests need to be improved, fixed, or removed altogether to ensure reliability.

    Communication is the key.

    Navigating the human aspects of code reviews can be as challenging as the technical ones. Here are some strategies that have worked for me when reviewing code.

1. I often engage with the code by asking clarifying questions. For example:
• “How does this method work?”
• “If this requirement changes, what else would need to be updated?”
• “How could we make this more maintainable?”
2. Praise the good! Notice when people did something well and praise them for it. Positive feedback from peers is highly motivating.
3. Focus on the code, not the person. It’s important to frame discussions around the code itself rather than the person who wrote it. This helps reduce defensiveness and keeps the focus on improving the code quality.
4. Discuss detailed points in person. Sometimes, a significant change is easier to discuss face-to-face rather than in written comments. If a discussion is becoming lengthy or complex, I’ll often suggest continuing it in person.
5. Explain your reasoning. When suggesting changes, it’s helpful to explain why you think the change is necessary and ask if there might be a better alternative. Providing context can prevent suggestions from seeming nit-picky.

    Conclusion

    This is not an exhaustive list of considerations for code reviews. For more guidance, I recommend checking out articles by Andrew Knight and Angie Jones. Their insights can provide additional strategies to enhance your code review process.

  • Part 2. How to approach API testing?

    Part 2. How to approach API testing?

    🤨 What is API Testing?

API testing is important for validating the functionality of the API and ensuring that it meets the functional requirements. It is critical for integration testing, since APIs are used to communicate between different software systems. API testing helps to identify issues early in the development cycle and prevents costly bugs and errors in production. This process is designed to test not only the API’s functionality but also its reliability, performance, and security.

    🧪 Why should you care about API Testing?

    You can find bugs earlier and save money

    Testing REST requests means you can find bugs earlier in the development process, sometimes even before the UI has been created!

    FACT:
According to the Systems Sciences Institute at IBM, the cost to fix a bug found during implementation is about six times higher than one identified during design. The cost to fix an error found after product release is four to five times as much as one uncovered during design, and up to 100 times more than one identified during the maintenance phase. In other words, the cost of a bug grows exponentially as the software progresses through the SDLC.
    Relative Cost of Fixing Defects

    You can find flaws before they are exploited

Malicious users know how to make REST requests and can use them to exploit security flaws in your application by making requests the UI doesn’t allow; you’ll want to find and fix these flaws before they are exploited.

    It is easy to automate

API automation scripts run much faster than UI automation scripts.

    ❌ Everything could go wrong!

When working with APIs, there is a set of risks and potential bugs that you need to account for to ensure the reliability and security of the application (a non-exhaustive list):

    ⚠️ Risk#1. We could extract personal / private information without proper authentication. It could lead to unauthorized access problems and data breaches.

    🐞 Bugs: Missing or misconfigured authentication tokens, incorrect permission settings, or bypassing authorization checks.


⚠️ Risk#2. When a user sends data in the wrong format, it could break the system with 500 errors.


⚠️ Risk#3. Improper input validation can lead to security vulnerabilities like SQL injection or cross-site scripting (XSS).

🐞 Bugs: not validating request parameters, not handling unexpected data formats properly


⚠️ Risk#4. Insecure data transmission. Transmitting data over unencrypted channels could expose sensitive information to interception.

🐞 Bugs: not using HTTPS, ignoring SSL certificate validation


⚠️ Risk#5. Poor error handling may expose sensitive information or make diagnosing issues difficult.

🐞 Bugs: returning overly detailed error messages that reveal implementation details, or messages that don’t give the user the information they need


⚠️ Risk#6. Performance issues. The API doesn’t handle load efficiently, which can lead to performance degradation or outages.

🐞 Bugs: memory leaks, inefficient database queries, unoptimised API response times.


This diagram illustrates the types of questions that a tester can pose to ensure comprehensive API testing. The list is not exhaustive.

    Questions that a tester can pose to ensure comprehensive API testing

    💡 Let’s take a look at the API Testing in more detail

    Introduced by Mike Cohn in his book Succeeding with Agile (2009), the pyramid is a metaphor for thinking about testing in software.

    The testing pyramid is a concept in software testing that represents the ideal distribution of different types of tests in a software development process.

    Source: https://semaphoreci.com/blog/testing-pyramid

    It emphasises having a larger number of lower-level tests and a smaller number of higher-level tests. The testing pyramid is a way to ensure a balanced and effective testing strategy.

I adjusted this pyramid for API testing, and here is what I’ve got:

    API Testing Pyramid

    Unit Testing

Unit tests, unit tests, and unit tests once more. Everybody knows the benefits of unit tests: we should be able to identify any problems with the current components of APIs as soon as possible. The higher the unit test coverage, the better for you and your product.

    Contract Testing

    Assert that the specs have not changed. This type of testing is used to test the contracts or agreements established between various software modules, components, or services that communicate with each other via APIs. These contracts specify the expected inputs, outputs, data formats, error handling, and behaviours of the APIs.

    JSON-schema is a contract that defines the expected data, types and formats of each field in the response and is used to verify the response.

For example, a minimal JSON Schema sketch for a hypothetical user response might look like this:
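{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["id", "username"],
  "properties": {
    "id": { "type": "integer" },
    "username": { "type": "string" },
    "email": { "type": "string", "format": "email" }
  },
  "additionalProperties": false
}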

    Official Documentation: https://json-schema.org/

    Functional Testing

The purpose of functional testing is to ensure that you can send a request and get back the anticipated response along with the expected status code. That includes positive and negative testing. Make sure to cover all of the possible data combinations.

    Test Scenario categories:

• Happy path (positive test cases): checks basic information and whether the main functionality is met
• Positive test cases with optional parameters: these extend the positive test cases and include extra checks
• Negative cases: here we expect the application to gracefully handle problem scenarios with both valid user input (for example, trying to add an existing username) and invalid user input (trying to add a username which is null); see the sketch after this list
• Authorization and permission tests
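A minimal sketch of one positive and one negative functional test, written with PactumJS against a hypothetical POST /users endpoint (the URL and payloads are made up for illustration):

// users.spec.js (hypothetical example)
import pkg from 'pactum';
const { spec } = pkg;

describe('POST /users', () => {
  it('creates a user (happy path)', async () => {
    await spec()
      .post('https://api.example.com/users')
      .withJson({ username: 'alice' })
      .expectStatus(201);
  });

  it('rejects a null username (negative case)', async () => {
    await spec()
      .post('https://api.example.com/users')
      .withJson({ username: null })
      .expectStatus(400);
  });
});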

    How to start with Functional Testing?

    1. Read API documentation / specification / requirements carefully to understand its endpoints, request methods, authentication methods, status codes and expected responses.
2. Based on the functionality you are going to test, outline positive and negative test scenarios which cover use cases and some edge cases as well. Revisit the test scenario categories above for more details.
    3. Setup test environment: create a dedicated test environment that mirrors the production environment.
4. Select appropriate tools (for example, Postman or Insomnia), frameworks (for example, pytest, JUnit, or Mocha), and programming languages (Python, JavaScript, Java, etc.) with suitable libraries for API testing.
    5. Plan Test Data: It is always important to populate the environment with the appropriate data.
    6. Write Automation scripts: Automate repetitive test cases, like smoke, regression suites to ensure efficient and consistent testing. Validate responses against expected outcomes and assertions, checking for proper status codes, headers, and data content.
    7. Test the API’s error-handling mechanisms: Verify that the API responds appropriately with clear error messages and correct status codes.
8. Document Test Results: Maintain detailed documentation of test cases, expected outcomes, and actual results to make onboarding of new team members easier.
    9. Collaborate with developers: it is important to have consistent catch-ups with your team and stakeholders to review test results and address any identified issues.
    10. Continuous Improvement: Continuously refine and improve your testing process based on lessons learned from previous test cycles.
    11. Feedback Loop: Provide feedback to the development team regarding the API’s usability, performance, and any issues encountered during testing.

    Non-Functional

Non-functional API testing is where testers check the non-functional aspects of an application, like its performance, security, usability, and reliability. Simply put, functional tests focus on whether the API works, whereas non-functional tests focus on how well the API works.

    End-to-end testing

In general, end-to-end testing is the process of testing a piece of software from start to finish, checking it by mimicking user actions. When it comes to APIs, it is crucial to check that they communicate properly by making calls the way a real client would.

    Exploratory testing


    You’re not done testing until you’ve checked that the software meets expectations and you’ve explored whether there are additional risks. A comprehensive test strategy incorporates both approaches.

    Elisabeth Hendrickson, book “Explore It!”

When all automated and scripted testing has been performed, it is time to examine the API interactively and observe its behavior. This is a great way to learn and to explore edge cases, uncovering issues that automated or scripted testing would have missed.

    There are two ways of doing it:

1. A test engineer can perform it individually, applying domain knowledge, intuition, critical thinking, and user-centric thinking.
2. Another way is pair testing, which involves two people: a driver and a navigator. It is a time-boxed session in which the driver performs the actual testing while the navigator observes, provides guidance, and takes notes where necessary. This approach maximizes creativity and encourages knowledge sharing and better collaboration between team members.

    More information: https://www.agileconnection.com/article/two-sides-software-testing-checking-and-exploring

    Book “Explore It!”: https://learning.oreilly.com/library/view/explore-it/9781941222584/

    BONUS:

Health Check API: the transition to the cloud and the refactoring of applications into microservices introduced new challenges in effectively monitoring these microservices at scale. To standardise the process of validating the status of a service and its dependencies, it is helpful to introduce a health check API endpoint to a RESTful (micro)service. As part of the returned service status, a health check API can also include performance information, such as component execution times or downstream service connection times. Depending on the state of the dependencies, an appropriate HTTP return code and JSON object are returned.
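For example, a degraded service might return an illustrative payload like this (the field names follow a common convention rather than a formal standard):

{
  "status": "DEGRADED",
  "components": {
    "database": { "status": "UP", "responseTimeMs": 12 },
    "paymentService": { "status": "DOWN", "error": "connection timeout" }
  }
}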

    🎬 Conclusion

In conclusion, mastering the art of API testing requires a structured approach built on strategic planning and continuous improvement.

Remember, API testing is not a one-time effort but an ongoing process that evolves alongside your software development lifecycle. Continuous improvement is key to refining your API testing strategy. Regularly review and update your test cases, incorporating changes due to new features, bug fixes, or code refactoring. Learn from exploratory testing outcomes, and identify areas for improvement by listening to your customers’ and team’s feedback.