
  • Assist testing with AI capabilities.


    Introduction

Just a decade ago, leveraging the power of AI models required significant investments of time, resources, and expertise. Developing and deploying AI models necessitated extensive training and dedicated infrastructure, often requiring businesses to hire specialized experts for development and maintenance. This process was cumbersome and inaccessible to many businesses. However, with the latest advancements in Large Language Models (LLMs), the landscape has shifted dramatically, and we are starting to benefit from what is commonly referred to as the “Democratization of AI.”

    Democratization of artificial intelligence means making AI available for all. In other words, open-source datasets and tools developed by companies like Microsoft and Google – which demand less knowledge of AI from the user – are made available so that anyone can build innovative AI software. This has led to the rise of ‘citizen data scientists’.

    The Ultimate Guide to Democratization in Artificial Intelligence

As a result, human resources and support personnel can leverage AI capabilities to compile comprehensive responses in a few minutes, while social media professionals can generate engaging announcements with the help of a couple of simple prompts. Testing and development are no exception. Testing, a critical aspect of product quality assurance, benefits immensely from AI-powered tools like GenAI. What sets GenAI apart is its ability to summarize, analyze, and generate information in a manner that enhances testing efficiency and effectiveness. Testers can leverage LLMs to accelerate testing procedures, conduct more thorough assessments, and ensure continuous improvement in product quality.

What are Large Language Models?

How can individuals with limited experience in building and utilizing AI best approach understanding its principles and practical applications? Luckily, there is a Computerphile video, “AI Language Models & Transformers”, explaining the fundamental principles of how LLMs work:

In this video, Rob Miles illustrates the concept using the example of typing on a smartphone keyboard. As you type, the keyboard suggests words based on the beginning of the sentence, updating its suggestions as you select options. This simple analogy mirrors how LLMs operate: they use probabilities to predict the next word, based on extensive training on vast datasets.

If you’d like to learn more about LLMs and how they are trained, in a nutshell, check out this article by Tim Lee, a journalist with a master’s degree in computer science, and Sean Trott, a cognitive scientist at the University of California, San Diego: Large language models, explained with a minimum of math and jargon

Given that LLMs operate on probabilities, achieving the desired outcomes often requires adjusting how we communicate, which may differ from normal human interaction. This is where prompt engineering comes into play: a collection of patterns and techniques for constructing the prompts we run against models. While I won’t delve deeply into this topic in this article, I do want to highlight a recent template developed by Dimitar Dimitrov. This resource, accessible at LLM Prompting, can be particularly valuable for beginners looking to construct prompts that extract optimal results.

What can LLMs do?

    • Generative Capabilities

Generative AI refers to the ability to produce original natural language output. Large Language Models (LLMs) excel at generating new content based on their training and the provided prompts. However, it’s essential to understand that the generation process relies on probabilistic models. Additionally, LLMs may lack context and specificity regarding specific features or products. Therefore, providing adequate information and instructions for the desired output is crucial.
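To make that concrete, here is a minimal sketch of what “adequate information and instructions” can look like in practice. The feature under test and the sendToLlm() helper are hypothetical stand-ins for whichever LLM API or chat window you actually use.

// Hypothetical helper standing in for your LLM provider or chat window.
async function sendToLlm(prompt: string): Promise<string> {
  // Call your LLM API of choice here; echoed back for illustration only.
  return `(model response to)\n${prompt}`;
}

// The prompt bundles context, a task, and explicit output instructions.
const prompt = `
You are helping test a "password reset" feature.
Context: the form accepts an email address and sends a one-time reset link.
Task: list 8 test ideas covering positive, negative and security cases.
Output format: a numbered list, one idea per line, no explanations.
`;

sendToLlm(prompt).then(console.log);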

    • Transformation Capabilities

Leveraging advanced algorithms, LLMs can efficiently convert code and data structures from one form to another. For example, they are proficient at migrating tests between tools such as Selenium and Cypress or Selenium and Playwright, as well as converting code from Python to JavaScript.
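As a rough illustration (the page URL and selectors below are made up, not taken from a real project), here is the kind of conversion you might ask for: a small Selenium WebDriver flow and its Playwright equivalent, both in TypeScript.

import { Builder, By } from 'selenium-webdriver';
import { chromium } from 'playwright';

// Original flow written with Selenium WebDriver for Node.js.
async function loginWithSelenium(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  await driver.get('https://example.com/login');
  await driver.findElement(By.id('username')).sendKeys('tester');
  await driver.findElement(By.css('button[type="submit"]')).click();
  await driver.quit();
}

// The same flow after an LLM-assisted conversion to Playwright:
// the logic and order of steps are preserved, only the API changes.
async function loginWithPlaywright(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/login');
  await page.fill('#username', 'tester');
  await page.click('button[type="submit"]');
  await browser.close();
}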

    • Enhancing Capabilities

LLMs enable us to enhance and enrich existing information through various means. In April 2023, Similarweb, a market competition analysis company, reported that Stack Overflow’s traffic in the preceding month had dropped by 14%. Copilot utilizes the same family of LLMs as ChatGPT, proficient in interpreting and generating both human and programming languages. So, with a plugin integrated into VSCode, developers can delegate the implementation of entire functions to Copilot instead of searching for them on Stack Overflow. Source: Stack Overflow is ChatGPT Casualty: Traffic Down 14% in March.

Moreover, ChatGPT makes a thoughtful pairing partner, an advanced version of the “rubber duck”: from analyzing ideas, to reviewing code, to solving code-related problems.

    How can we leverage AI in testing?

    • Formulate test ideas

    Risk Identification and Test Idea Generation: Relying only on LLM-generated output to define testing decisions should be avoided. Instead, LLMs can serve as valuable tools for suggesting test ideas and identifying potential risks. These suggestions can then be used as starting points for further exploration or integrated into existing testing frameworks.

    Broadened Analysis: LLMs contribute to expanding analysis endeavors such as risk assessment and shift-left testing. By feeding them existing analysis data, LLMs can offer insights and suggest new ideas for incorporation into our analysis frameworks, enriching the overall assessment process.

    • Test Cases Implementation

    Code Snippets: While expecting LLMs to generate complete automated tests or frameworks may yield limited value, leveraging them to generate smaller components such as code snippets can be highly advantageous. These snippets can support testing activities like exploratory testing, enhancing efficiency and effectiveness.
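For example, during an exploratory session you might ask an LLM for a small, throwaway snippet like the one below, which stubs a backend call so an error state can be explored in the UI. The URL pattern, page, and error message are illustrative assumptions, not a recipe from a specific project.

import { test, expect } from '@playwright/test';

test('explore UI behaviour when the orders API fails', async ({ page }) => {
  // Stub the orders endpoint so the backend appears to be failing.
  await page.route('**/api/orders', (route) =>
    route.fulfill({
      status: 500,
      contentType: 'application/json',
      body: JSON.stringify({ message: 'Internal Server Error' }),
    })
  );

  await page.goto('https://example.com/orders');

  // Observe how the UI copes with the failure.
  await expect(page.getByText('Something went wrong')).toBeVisible();
});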

Code Conversion: LLMs are adept at converting functions, classes, and other code components into different forms. Their value lies in their capacity to retain the logic and flow of the original code while translating it into different languages.

    Descriptive Annotations: Similar to code review, LLMs assist in enhancing code descriptiveness, enabling the rapid creation and maintenance of code comments. This proves invaluable in automated testing scenarios where clear communication of automation logic is vital for maintenance purposes.

    Examples:

    1. ZeroStep https://github.com/zerostep-ai/zerostep: makes it easier to write test cases with Playwright. 
    2. Postbot – AI-powered Postman Assistant: https://beththetester.wordpress.com/2023/06/12/5-ways-postmans-ai-postbot-can-help-your-testing/ 
    3. Visual testing with Applitools: https://applitools.com/ 
    4. CoPilot: https://copilot.microsoft.com/ 
    • Generate test data and prepare test environments

    Test Data Generation: LLMs, when equipped with explicit rules, can easily generate sets of data suitable for a variety of testing purposes.
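As a small sketch of what “explicit rules” can look like, the shape and constraints below are assumptions chosen for illustration; spelling them out in the prompt is what makes the generated data usable.

// Rules given to the LLM, captured here as a TypeScript type plus comments.
interface TestUser {
  username: string; // 3-20 characters, letters and digits only
  email: string;    // unique, valid email format
  age: number;      // between 18 and 99
  country: string;  // ISO 3166-1 alpha-2 code
}

// The kind of data set an LLM can return when the rules are explicit,
// including a deliberately invalid record for negative tests.
const testUsers: TestUser[] = [
  { username: 'annak34', email: 'anna.k@example.com', age: 34, country: 'DE' },
  { username: 'liwei22', email: 'li.wei@example.com', age: 22, country: 'SG' },
  { username: 'x', email: 'broken-at-example.com', age: 17, country: 'ZZ' },
];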

Data Transformation: Leveraging LLMs for data transformation can improve testing processes significantly. For instance, LLMs can readily convert plain-text test data into SQL statements or translate SQL statements into helper functions utilized in test automation.
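A minimal sketch of the second case, assuming a generic SQL client with a query() method; the table, columns, and client interface are illustrative, not from a real codebase.

// Generic client interface so the helper stays framework-agnostic.
interface SqlClient {
  query<T>(sql: string, params: unknown[]): Promise<T[]>;
}

interface User {
  id: number;
  email: string;
  status: string;
}

// Helper an LLM might produce from the plain statement
// "SELECT * FROM users WHERE status = 'active'".
async function getActiveUsers(db: SqlClient): Promise<User[]> {
  return db.query<User>('SELECT * FROM users WHERE status = $1', ['active']);
}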

    • Report Generation and Issues Reporting:

    Summarizing Notes: Although not a direct data conversion, LLMs can simultaneously transform and summarize information. They can extract raw testing notes from activities like exploratory or shift-left testing sessions and compile a summary for the development or management team. 

    • Test Maintenance:

    Automated Test Maintenance: AI-driven automation frameworks can monitor test execution results and automatically update test cases or scripts based on changes in application behavior or requirements. This helps ensure that tests remain relevant and effective as the software evolves over time.

    Examples:

1. Testim.io: a cloud-based platform that empowers testers with efficient test case authoring, maintenance, and execution without the need for extensive coding expertise. It allows better test case categorization. One of Testim.io’s most significant advantages is its embedded self-healing mechanism.

Numerous companies (including Google, Facebook and Microsoft) are already leveraging LLMs to speed up and improve their automated testing procedures. I recently came across an article highlighting real-world examples that caught my attention: Enhancing Test Coverage with AI: Unleashing the Power of Automated Test Generation.

    Trust, but verify

    Russian Proverb

While LLMs hold significant potential, it’s crucial not to rely blindly on their abilities. LLMs operate based on probabilities, which differ from human reasoning, underscoring the importance of skepticism in evaluating their outputs. Given that LLM hallucinations can be very convincing, blindly trusting LLMs can easily compromise the quality of testing. Thus, it’s essential to remember that humans, not LLMs, are ultimately responsible for problem-solving, critical thinking, and decision-making.

    AI + Humans

In conclusion, in one of the latest episodes of TestGuild, featuring Tariq King, Chief Executive Officer and Head of Test IO, a profound insight was shared:

    Tariq emphasized the importance of bringing humans in the loop to ensure AI systems remain aligned with their intended objectives, thereby preventing potential harm and mitigating bias.

    “AI should be something that we see as good, it helps us grow, it helps us automate and become more efficient and so on and so forth. The only way that you can actually make sure that AI serves that purpose for humans is to have humans in the loop throughout the process. Meaning, humans involved in AI development, whether it could be curation test data, whether that be mitigating unwanted bias .. You need humans in the loop to review and make sure that these systems are not deviating away from something that would be very useful into something that’s either not useful or even potentially harmful.”

    • Tariq King, Chief Executive Officer and Head of Test IO

    Resources:

    1. AI-Assisted Testing by Mark Winteringham https://www.manning.com/books/ai-assisted-testing
    2. GenAI for Testers Course: https://www.thetesttribe.com/courses/generative-ai-software-testing/
    3. Prompt Engineering Guide: https://www.promptingguide.ai/
    4. ChatGPT Prompt Engineering for Developers: https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/
  • Enabling debugging functionalities in Playwright tests for VSCode.


Playwright provides a bunch of powerful features for debugging! One of them is verbose logging. According to the Playwright documentation, by running the command:

    DEBUG=pw:api npx playwright test

you can get a detailed overview of what is happening behind the scenes.

If you go a step further and install the Playwright extension, you get a whole spectrum of capabilities for effective test development: running tests with a single click, easier configuration, codegen capabilities, etc.

While utilising all these awesome capabilities, though, you might miss the verbose logging in the test output.

How can you put all these nice capabilities (the Playwright extension features and verbose logging) together? There is a way: let’s add one line of configuration to the VSCode Playwright extension settings file.

    Steps to achieve it:

1. In your VSCode IDE, navigate to Extensions.
2. Find the Playwright extension, click on the gear icon and open the extension settings.
3. Click on Edit in settings.json.
4. Add one line of configuration, "DEBUG": "pw:api", inside the playwright.env block:

"playwright.env": {
    "DEBUG": "pw:api"
}

5. Save the settings.json file and close it.
6. Run the test cases again in the Testing tab.
7. Check the Test Output and voilà!

  • Part 2. How to approach API testing?


    🤨 What is API Testing?

    API testing is important for validating the functionality of the API and ensuring that it meets the functional requirements. It is critical for integration testing since APIs are used to communicate between different software systems. API testing helps to identify issues early in the development cycle and prevents costly bugs and errors in production. This process is designed to not only test the API’s functionality — but also its reliability, performance, and security.

    🧪 Why should you care about API Testing?

    You can find bugs earlier and save money

    Testing REST requests means you can find bugs earlier in the development process, sometimes even before the UI has been created!

    FACT:
    According to the Systems Sciences Institute at IBM, the cost to fix a bug found during implementation is about six times higher than one identified during design. The cost to fix an error found after product release is then four to five times as much as one uncovered during design, and up to 100 times more than one identified during the maintenance phase. In other words, the cost of a bug grows exponentially as the software progresses through the SDLC.
    Relative Cost of Fixing Defects

    You can find flaws before they are exploited

Malicious users know how to make REST requests and can use them to exploit security flaws in your application by making requests the UI doesn’t allow; you’ll want to find and fix these flaws before they are exploited.

    It is easy to automate

API automation scripts run much faster than UI automation scripts.

    ❌ Everything could go wrong!

When working with APIs, there is a set of risks and potential bugs you should watch out for to ensure the reliability and security of the application (this is not an exhaustive list of risks):

⚠️ Risk#1. Personal or private information could be extracted without proper authentication. This could lead to unauthorized access and data breaches.

    🐞 Bugs: Missing or misconfigured authentication tokens, incorrect permission settings, or bypassing authorization checks.


⚠️ Risk#2. When a user sends invalid data or data in the wrong format, it could break the system with 500 errors.


⚠️ Risk#3. Improper input validation can lead to security vulnerabilities like SQL injection or cross-site scripting.

🐞 Bugs: not validating request parameters, not handling unexpected data formats properly


⚠️ Risk#4. Insecure data transmission. Transmitting data over unencrypted channels could expose sensitive information or allow interception.

🐞 Bugs: not using HTTPS, ignoring SSL certificate validation


⚠️ Risk#5. Poor error handling may expose sensitive information or make issues difficult to diagnose.

🐞 Bugs: returning overly detailed error messages that reveal implementation details, or messages that do not provide the necessary information to the user


⚠️ Risk#6. Performance issues. The API doesn’t handle load efficiently, which can lead to performance degradation or outages.

🐞 Bugs: memory leaks, inefficient database queries, unoptimized API response times.


This schema illustrates the types of questions that a tester can pose to ensure comprehensive API testing. The list is not exhaustive.

    Questions that a tester can pose to ensure comprehensive API testing

    💡 Let’s take a look at the API Testing in more detail

    Introduced by Mike Cohn in his book Succeeding with Agile (2009), the pyramid is a metaphor for thinking about testing in software.

    The testing pyramid is a concept in software testing that represents the ideal distribution of different types of tests in a software development process.

    Source: https://semaphoreci.com/blog/testing-pyramid

    It emphasises having a larger number of lower-level tests and a smaller number of higher-level tests. The testing pyramid is a way to ensure a balanced and effective testing strategy.

I adjusted this pyramid to API testing, and here is what I got:

    API Testing Pyramid

    Unit Testing

Unit tests, unit tests and unit tests once more. Everybody knows the benefits of unit tests: we should be able to identify any problems with the current components of APIs as soon as possible. The higher the unit test coverage, the better for you and your product.
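A minimal sketch of what such a test can look like, using Node's built-in test runner; the validation helper below is a hypothetical piece of endpoint logic, not code from a real API.

import { test } from 'node:test';
import assert from 'node:assert/strict';

// Hypothetical validation helper used somewhere inside an API endpoint.
function isValidEmail(value: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

test('accepts a well-formed email', () => {
  assert.equal(isValidEmail('user@example.com'), true);
});

test('rejects an email without a domain', () => {
  assert.equal(isValidEmail('user@'), false);
});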

    Contract Testing

    Assert that the specs have not changed. This type of testing is used to test the contracts or agreements established between various software modules, components, or services that communicate with each other via APIs. These contracts specify the expected inputs, outputs, data formats, error handling, and behaviours of the APIs.

JSON Schema is a contract that defines the expected data, types, and formats of each field in the response, and it is used to verify the response.

    Example of JSON Schema
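A minimal sketch of schema-based contract checking in TypeScript, using the Ajv validator; the user schema and sample response are assumptions made up for illustration.

import Ajv from 'ajv';

// Contract for a hypothetical GET /users/{id} response.
const userSchema = {
  type: 'object',
  required: ['id', 'email', 'createdAt'],
  properties: {
    id: { type: 'integer' },
    email: { type: 'string' },
    createdAt: { type: 'string' },
  },
  additionalProperties: false,
};

const ajv = new Ajv();
const validate = ajv.compile(userSchema);

// In a real test this would be the parsed API response.
const response = { id: 42, email: 'anna@example.com', createdAt: '2024-01-15T09:30:00Z' };

if (!validate(response)) {
  // Any violation means the contract has changed and the test should fail.
  console.error(validate.errors);
}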

    Official Documentation: https://json-schema.org/

    Functional Testing

The purpose of functional testing is to ensure that you can send a request and get back the anticipated response along with the expected status code. That includes positive and negative testing. Make sure to cover all of the possible data combinations.

    Test Scenario categories:

• Happy Path (positive test cases): checks the basic information and whether the main functionality is met
• Positive test cases with optional parameters: these extend the positive test cases and include extra checks
• Negative cases: here we expect the application to gracefully handle problem scenarios, with both valid user input (for example, trying to add an existing username) and invalid user input (trying to add a username which is null)
• Authorization and permission tests
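To make the categories above concrete, here is a minimal sketch of one happy-path and one negative case written with Playwright's request fixture; the /api/users endpoint, payloads, and expected codes are illustrative assumptions.

import { test, expect } from '@playwright/test';

const BASE_URL = 'https://api.example.com';

test('creates a user with valid data (happy path)', async ({ request }) => {
  const response = await request.post(`${BASE_URL}/api/users`, {
    data: { username: 'annak34', email: 'anna.k@example.com' },
  });
  expect(response.status()).toBe(201);
  const body = await response.json();
  expect(body.username).toBe('annak34');
});

test('rejects a user with a null username (negative case)', async ({ request }) => {
  const response = await request.post(`${BASE_URL}/api/users`, {
    data: { username: null, email: 'anna.k@example.com' },
  });
  expect(response.status()).toBe(400);
});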

    How to start with Functional Testing?

    1. Read API documentation / specification / requirements carefully to understand its endpoints, request methods, authentication methods, status codes and expected responses.
    2. Based on the functionality you are going to test, outline positive and negative test scenarios which cover use cases and some edge cases as well. Revisit Functional API Testing section for more details.
3. Set up the test environment: create a dedicated test environment that mirrors the production environment.
4. Select an appropriate tool (for example, Postman, Insomnia), framework (for example, pytest, JUnit, Mocha), and programming language (Python, JavaScript, Java, etc.) with suitable libraries for API testing.
    5. Plan Test Data: It is always important to populate the environment with the appropriate data.
    6. Write Automation scripts: Automate repetitive test cases, like smoke, regression suites to ensure efficient and consistent testing. Validate responses against expected outcomes and assertions, checking for proper status codes, headers, and data content.
    7. Test the API’s error-handling mechanisms: Verify that the API responds appropriately with clear error messages and correct status codes.
    8. Document Test Results: Maintain detailed documentation of test cases, expected outcomes, actual results to make onboarding of new team members easier.
    9. Collaborate with developers: it is important to have consistent catch-ups with your team and stakeholders to review test results and address any identified issues.
    10. Continuous Improvement: Continuously refine and improve your testing process based on lessons learned from previous test cycles.
    11. Feedback Loop: Provide feedback to the development team regarding the API’s usability, performance, and any issues encountered during testing.

    Non-Functional

Non-functional API testing is where the testers check the non-functional aspects of an application, like its performance, security, usability, and reliability. Simply put, functional tests focus on whether the API works, whereas non-functional tests focus on how well the API works.

    End-to-end testing

In general, end-to-end testing is the process of testing a piece of software from start to finish. We check it by mimicking user actions. When it comes to APIs, it is crucial to check that they can communicate properly by making calls like a real client would.

    Exploratory testing


    You’re not done testing until you’ve checked that the software meets expectations and you’ve explored whether there are additional risks. A comprehensive test strategy incorporates both approaches.

    Elisabeth Hendrickson, book “Explore It!”

When all the automated and scripted testing has been performed, it is time to examine the API, interact with it, and observe its behavior. This is a great way to learn and to explore edge cases, uncovering issues that automated or scripted testing would have missed.

    There are two ways of doing it:

1. A test engineer performs it individually. They need to apply domain knowledge, intuition, critical thinking, and user-centric thinking.
2. There is another way — pair testing, which involves two people: a driver and a navigator. It is time-boxed testing in which the driver performs the actual testing while the navigator observes, provides guidance, and takes notes where necessary. This approach maximizes creativity and encourages knowledge sharing and better collaboration between team members.

    More information: https://www.agileconnection.com/article/two-sides-software-testing-checking-and-exploring

    Book “Explore It!”: https://learning.oreilly.com/library/view/explore-it/9781941222584/

    BONUS:

Health Check API: the transition to the cloud and the refactoring of applications into microservices introduced new challenges in effectively monitoring these microservices at scale. To standardise the process of validating the status of a service and its dependencies, it is helpful to introduce a health check API endpoint in a RESTful (micro) service. As part of the returned service status, a health check API can also include performance information, such as component execution times or downstream service connection times. Depending on the state of the dependencies, an appropriate HTTP return code and JSON object are returned.
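A minimal sketch of such an endpoint using Express; the dependency checks, response times, and field names are illustrative assumptions rather than a standard.

import express from 'express';

const app = express();

app.get('/health', async (_req, res) => {
  // In a real service these would be live checks, e.g. a SELECT 1 against
  // the database and a PING against the cache.
  const database = { status: 'UP', responseTimeMs: 12 };
  const cache = { status: 'UP', responseTimeMs: 3 };

  const healthy = [database, cache].every((check) => check.status === 'UP');

  // Return 200 when everything is up, 503 when a dependency is down.
  res.status(healthy ? 200 : 503).json({
    status: healthy ? 'UP' : 'DOWN',
    checks: { database, cache },
  });
});

app.listen(3000);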

    🎬 Conclusion

In conclusion, mastering the art of API testing requires a well-rounded approach that includes strategic planning and continuous improvement.

    Remember, API testing is not a one-time effort, but an ongoing process that evolves alongside your software development lifecycle. Continuous improvement is key to refining your API testing strategy. Regularly review and update your test cases, incorporating changes due to new features, bug fixes, or code refactoring. Learn from exploratory testing output, identify areas of improvement by listening to your customer’s and team’s feedback.

  • Part 1: API explained


This is the first part of a series about API Testing. I am going to start with the general concepts: the fundamental idea behind APIs, their historical roots, the various types of protocols they employ, and the essential components that constitute an API.

    A Historical Perspective

Back in the 1950s, an API was understood as a potential method to facilitate communication between two computers. The term was first mentioned in a 1951 book written by Maurice Wilkes and David Wheeler called ‘The Preparation of Programs for an Electronic Digital Computer’, which outlined several key computing terms, including the first API. At this stage, APIs were just starting to exist and were limited to simple, command-line interfaces that enabled programmers to interact with computers.

    The Preparation of Programs for an Electronic Digital Computer, Maurice Wilkes and David Wheeler

    Blog: https://blog.postman.com/intro-to-apis-history-of-apis/

    What is API?

API stands for Application Programming Interface. An API is a set of routines, protocols, and tools for building software applications.

How did APIs evolve over time?

    The diagram illustrates the API timeline and API styles comparison. Source: https://blog.bytebytego.com/p/soap-vs-rest-vs-graphql-vs-rpc

As the internet has changed and evolved over time, applications and APIs have evolved along with it. Many years ago, APIs were built with strict rules to allow the two sides of the interface to talk to each other. Over time, different API protocols have been released, each with its own pattern for standardizing data exchange.

1. SOAP is an XML-formatted, highly standardized web communication protocol, released by Microsoft in the 1990s. The XML data format carries a lot of formality and, paired with the massive message structure, it makes SOAP the most verbose API style.
2. In the early 2000s, the web started to shift towards a more consumer-based place. Some e-commerce sites, such as eBay and Amazon, started using APIs, which are more public and flexible. Twitter, Facebook and others joined them as well in using REST APIs. This API style was originally described in 2000 by Roy Fielding in his doctoral dissertation. REST stands for Representational State Transfer. REST makes server-side data available by representing it in simple formats, often JSON. It is the most commonly used style nowadays.
    3. The internet continued to change, mobile applications were becoming popular. Companies faced challenges with the amount of data they wanted to transfer on mobile devices. So, Facebook created GraphQL. This query language helps to reduce the amount of data that gets transferred while introducing a slightly more rigid structure to the API.
4. gRPC was developed by Google for implementing distributed software systems that need to run fast on a massive scale. Initially, it was not standardized for use as a generic framework, as it was closely tied to Google’s internal infrastructure. In 2015, Google released it as open source and standardized it for community use under the name gRPC. During the first year after its launch, it was adopted by large companies such as Netflix, Docker and Cisco, among others.

    REST API vs. SOAP vs. GraphQL vs. gRPC by Alex Xu: https://www.altexsoft.com/blog/soap-vs-rest-vs-graphql-vs-rpc/

    More about REST API: https://blog.bytebytego.com/p/the-foundation-of-rest-api-http

Why is RESTful API so popular: https://blog.bytebytego.com/p/why-is-restful-api-so-popular

    API in more detail

    The working principle of API is commonly expressed through the request-response communication between a client and a server. In a web API, a client is on one side of the interface and sends requests, while a server (or servers) is on the other side of the interface and responds to the request.

    Since the REST API is the most popular, we are going to talk about it in detail.

    These are the general steps for any REST API call:

1. The client sends a request to the server. The client follows the API documentation to format the request in a way that the server understands.
    2. The server authenticates the client and confirms that the client has the right to make that request.
    3. The server receives the request and processes it internally.
    4. The server returns a response to the client. The response contains information that tells the client whether the request was successful. The response also includes any information that the client requested.
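A minimal sketch of this cycle from the client side, using the fetch API built into modern browsers and Node 18+; the endpoint and token are hypothetical.

// Step 1: format the request the way the API documentation describes.
const response = await fetch('https://api.example.com/users/42', {
  method: 'GET',
  headers: {
    Authorization: 'Bearer <token>', // step 2: lets the server authenticate the client
    Accept: 'application/json',
  },
});

// Step 4: the status code tells the client whether the request succeeded.
console.log(response.status); // e.g. 200

// The response body carries the information the client asked for.
const user = await response.json();
console.log(user);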

    1 — HTTP Methods

Request methods are the actions that the client wants to perform on the server resource. The most common methods are GET, POST, PUT and DELETE; others are HEAD, CONNECT, OPTIONS, TRACE and PATCH.

    • GET: retrieves the information from the server
    • POST: used to add a new object to the server resource.
    • PUT: used to update the existing object on the server resource.
    • DELETE: used to delete the object on the server resource.

    More information: https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods

    2 — HTTP Headers

    HTTP headers play a crucial role in how clients and servers send and receive data. They provide a structured way for these entities to communicate important metadata about the request or response. This metadata can contain various information like the type of data being sent, its length, how it’s compressed, and more.

Headers are logically grouped into three categories: request headers, response headers and general headers. This can be seen in the network tab of the browser after sending the request.

    Request headers are:

    1. Authorization: request header can be used to provide credentials that authenticate a user agent with a server, allowing access to a protected resource.
    2. Host: this is the domain name of the server
    3. Accept-Language: request HTTP header indicates the natural language and locale that the client prefers.
    4. Accept-Encoding: request HTTP header indicates the content encoding (usually a compression algorithm) that the client can understand.
5. Content-Type: indicates the media type of the data in the request body being sent to the server.

    Response headers:

    1. Expires: this header contains the date/time after which the response is considered expired.
    2. Content-Length: this field in the request or response header plays a crucial role in data transfer. It specifically indicates the size of the body of the request or response in bytes. This helps the receiver understand when the current message ends and potentially prepare for the next one, especially in cases where multiple HTTP messages are being sent over the same connection.
    3. Content-Type: this field tells the client the format of the data it’s receiving
    4. Cache-Control: HTTP header field holds directives (instructions) — in both requests and responses — that control caching in browsers and shared caches (e.g. Proxies, CDNs)
    5. Date: HTTP header contains the date and time at which the message originated.
6. Keep-Alive: a general header that allows the sender to hint at how the connection may be used, setting a timeout and a maximum number of requests.

    General Headers:

    1. Request URL
    2. Request Method
    3. Status Code
    4. Remote Address
    5. Connection

    More information: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers

    3 — Request/Response Payload

The request body has a format to be followed, which is understood by the server resource or the service endpoint. Usually, the request and response bodies are in JSON.

    What is JSON?

    JSON (JavaScript Object Notation) is an open-standard file format or data interchange format that uses human-readable text to transmit data objects.

    A JSON object contains data in the form of a key/value pair. The keys are strings and the values are the JSON types. Keys and values are separated by a colon. Each entry (key/value pair) is separated by a comma. The { (curly brace) represents the JSON object. An example of JSON is provided below.

{
  "First name": "John",
  "Age": 22,
  "isMarried": false,
  "Hobbies": [
    "Netflix",
    "mountain biking"
  ]
}

    4 — URL

    A REST API is accessed with a URL. The URL consists of a base URL, resource, path variables and query parameters. The base URL is the internet host name for the REST API. Resources are presented as sets of endpoints grouped on the basis of related data or the object they allow working with.

    Difference between query parameters and path variables:

    The difference between path variables and query parameters
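A small sketch of how those parts fit together, using the URL class available in browsers and Node; the host, resource, and parameters are made up for illustration.

const baseUrl = 'https://api.example.com';                 // base URL: the API host
const userId = 42;                                         // path variable: identifies one resource
const url = new URL(`/users/${userId}/orders`, baseUrl);   // resource path

url.searchParams.set('status', 'shipped');                 // query parameters: filter or refine
url.searchParams.set('limit', '10');                       // the returned collection

console.log(url.toString());
// https://api.example.com/users/42/orders?status=shipped&limit=10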

    5 — HTTP Status Codes

A REST response includes a status code that indicates whether the request was successful and, if not, the type of error that occurred.

Response codes are grouped into various classes based on the characteristics of the response. The most common groupings are as follows (with several examples):

    1. Informational — 1XX

    100 — Continue: this interim response indicates that the client should continue the request or ignore the response if the request is already finished.

    2. Success — 2XX

    200 — OK: the request was successful

201 — Created: the request was successful, and one or more entities were created

204 — No Content: the request was processed successfully and no data is returned

    3. Redirection — 3XX

    301 Moved Permanently — this response should include the new URI in the header so that the client will know where to point the request next time

    304 — Not modified

    4. Client Error — 4XX

400 — Bad Request: the request was not properly formed and therefore was not successful

404 — Not Found: the requested resource does not exist, often because the URI path is incorrect

    403 Forbidden — the client has the appropriate authentication to make the request but doesn’t have the permission to view the resource

    5. Server Error — 5XX

    503 — Service Unavailable: the responding server is temporarily down for some reason.

    More information: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status

    Conclusion

    We began by diving into the fundamental concepts, grasping the essence of Application Programming Interfaces.

    Tracing their historical roots, we witnessed the evolutionary growth of APIs. From the early days of monolithic architectures to the rise of microservices, APIs have proven to be the backbone of seamless communication between various software components.

    Furthermore, we explored the diverse types of protocols employed by APIs, including REST, SOAP, GraphQL, and more. Each protocol brings its unique strengths, ensuring that developers have the flexibility to choose the most suitable option for their projects.

    Understanding the essential components of an API, such as endpoints, methods, headers, and payloads, has given us a deeper appreciation for the intricacies involved in API design and usage. These components act as the building blocks that facilitate data exchange, functionality integration, and ultimately, the seamless flow of information between different applications.

    In the upcoming parts of this series, we will take a look at the world of API Testing. We will explore the best practices for testing APIs, the tools and frameworks available, and various testing methodologies to ensure the robustness, security, and efficiency of APIs.