Manual Testing Fundamentals: Types and Techniques

Manual testing is a type of software testing where testers manually execute test cases without using any automation tools. It involves human intervention to verify the functionality, usability, and performance of an application. Manual testing is essential for identifying bugs, ensuring the software meets requirements, and providing a user-friendly experience. Below are the types of manual testing:


1. Functional Testing

Functional testing focuses on verifying that the software functions as expected according to the specified requirements. Testers validate each feature and ensure it works correctly.

Examples:
Unit Testing: Testing individual components or modules in isolation. This is the first level of functional testing and is generally performed by the developers themselves.

Integration Testing: Integration Testing is the process of testing how different parts or modules of a software system work together when combined. After each module is developed and tested individually (unit testing), integration testing checks whether the modules interact correctly when put together. The goal is to find issues that appear only when these parts are integrated, such as data not passing properly between them.

System Testing: System Testing is the process of testing the entire software system as a whole to make sure everything works together as expected. It checks that all parts of the system function correctly when combined and that the system meets its requirements.

Smoke Testing: This is a quick, initial test done to see if the basic features of a software application are working. It helps identify any major issues before more detailed testing begins. It's like checking whether the system "lights up" and runs at a basic level.

Sanity Testing: This is a quick check to make sure that recent changes, such as bug fixes or new features, haven't caused problems with the existing functionality of the software. It's done to confirm that the changes work as expected without affecting other parts of the system.

Regression Testing: Regression Testing is the process of re-testing the software after making changes, like adding new features or fixing bugs, to ensure that those changes haven’t caused any new issues or broken existing functionality. It helps confirm that the software still works as it did before the changes.


Let's understand all of the above testing types with the scenario of the online shopping website described below.

Imagine you're developing an online shopping website. Unit testing focuses on testing individual features like the "total price calculation" function, ensuring it works correctly. Once this function is tested, integration testing checks how it interacts with other features, such as ensuring that when a product is added to the cart, the total price updates correctly. Before diving deeper, smoke testing quickly checks if the most basic features, like logging in and adding items to the cart, are working at all. After fixing bugs, like a broken "add to cart" button, sanity testing ensures that the fix didn’t break any other essential functions, like the checkout process. After adding new features, like a "wishlist," regression testing is performed to confirm that nothing else, like the cart or checkout, has been broken by the new changes. Finally, system testing tests the entire website, ensuring that all the parts work together as expected, from browsing products to completing a purchase. Each of these testing types ensures the website is functional, reliable, and free of bugs throughout development.
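
To make the unit-testing step concrete, here is a minimal sketch of the kind of check involved. Manual testers usually record this as a written test case rather than code; the calculate_total function and its expected values below are hypothetical.

    # Hypothetical "total price calculation" function from the shopping website.
    def calculate_total(items):
        """Return the total price for a list of (unit_price, quantity) pairs."""
        return sum(price * quantity for price, quantity in items)

    # Unit-level checks, verified one function at a time, independent of the rest of the site.
    assert calculate_total([]) == 0                            # empty cart
    assert calculate_total([(10.0, 2)]) == 20.0                # one product, quantity 2
    assert calculate_total([(10.0, 2), (5.0, 3)]) == 35.0      # multiple products

    print("Unit-level checks for calculate_total passed.")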

2. Non-Functional Testing

Non-functional testing focuses on verifying the non-functional aspects of a software application, such as how it performs under various conditions, how user-friendly it is, its compatibility with different platforms, and its security features. Unlike functional testing, which examines whether the system performs specific tasks correctly, non-functional testing ensures the software meets performance, usability, and reliability standards.

Examples:
Performance Testing: Performance Testing evaluates how well an application performs in terms of speed, responsiveness, and stability when used in normal or peak conditions. It helps identify performance bottlenecks, ensuring the application meets required performance standards.

Load Testing: Load Testing assesses how an application performs under expected user loads, such as how it handles a specific number of concurrent users. This testing ensures that the system can handle typical traffic without performance degradation or failures (a short sketch of this idea appears after these examples).

Stress Testing: Stress Testing goes beyond load testing by pushing the application beyond its expected load to evaluate how it behaves under extreme or abnormal conditions, such as an overwhelming number of users. The goal is to determine the system's breaking point and how it recovers from failure.

Usability Testing: Usability Testing evaluates how user-friendly and intuitive an application is by testing it with real users. The goal is to identify any difficulties users face when interacting with the system and improve the overall user experience.

Compatibility Testing: It ensures that the application works properly across different devices, browsers, or operating systems. It verifies that users have a consistent experience regardless of the platform they are using.

Security Testing: This testing focuses on identifying vulnerabilities in the application to ensure that it is secure from external threats. It involves testing for common security risks, such as unauthorized access, data breaches, and vulnerabilities in data protection protocols.
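
Load and stress testing are usually driven by dedicated tools rather than performed by hand, but a minimal sketch of the underlying idea, sending a number of concurrent requests and timing the responses, helps clarify what "expected user load" means. The target URL and user count below are placeholders, not part of any real setup.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    TARGET_URL = "https://example.com"  # placeholder for the application under test
    CONCURRENT_USERS = 10               # assumed "expected load" for this sketch

    def one_request(_):
        """Simulate a single user: fetch the page and return the response time."""
        start = time.perf_counter()
        with urlopen(TARGET_URL, timeout=10) as response:
            response.read()
        return time.perf_counter() - start

    # Fire all requests at the same time and summarize the timings.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        timings = list(pool.map(one_request, range(CONCURRENT_USERS)))

    print(f"Average response time: {sum(timings) / len(timings):.2f}s")
    print(f"Slowest response time: {max(timings):.2f}s")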

3. User Interface (UI) Testing

User Interface (UI) Testing is the process of evaluating the graphical user interface (GUI) of an application to ensure it meets design specifications and provides a smooth, intuitive experience for users. It checks elements like buttons, menus, text boxes, colors, icons, and navigation to confirm that they are visually correct, functional, and easy to use. The goal of UI testing is to ensure that the interface is user-friendly, consistent, and responsive across different devices and screen sizes.

Examples:
Checking alignment, fonts, colors, and spacing.

Verifying buttons, menus, and navigation elements.

Ensuring responsiveness across different screen sizes.

4. Exploratory Testing

Exploratory Testing is an informal testing approach where testers actively explore the application to identify defects without following predefined test cases. In this method, testers use their creativity, experience, and intuition to interact with the application in real-time, discovering issues that might not be covered by formal testing scripts. The goal is to learn about the software while testing it, using feedback from the application to guide further exploration. This type of testing is especially useful for finding unexpected or complex bugs and is often used when there's limited documentation or time constraints. It combines test design and execution in a dynamic, flexible manner.

Examples:
Testing edge cases or unusual scenarios.

Simulating real-world user behavior.

5. Ad-hoc Testing

Ad-hoc Testing is an informal and unstructured testing approach where testers randomly test the application without any formal planning, documentation, or test cases. It relies on the tester’s experience, intuition, and understanding of the application to find defects. Testers often perform Ad-hoc testing by exploring different features or functionalities of the software in an unpredictable way, looking for any issues or bugs that might not be caught in formal test processes.

The goal of Ad-hoc testing is to identify defects that other testing methods may have missed, especially those that arise from unexpected or unconventional user interactions. While it’s not systematic or planned, it can be effective for quickly uncovering flaws in areas that might not be covered by structured tests.

Examples:
A tester might click random buttons, enter invalid inputs, or perform unexpected actions to see if the system still behaves as expected.

6. Acceptance Testing

Acceptance Testing is the process of verifying whether a software application meets the business requirements and if it’s ready for release. It’s typically performed by the end user or client to ensure that the software functions as expected in real-world scenarios. The goal of acceptance testing is to confirm that the software satisfies the agreed-upon requirements, performs key tasks correctly, and provides the necessary features.

There are two main types of acceptance testing:

  • Alpha Testing: Done by the internal development team or testers before releasing the software to the client.
  • Beta Testing: Performed by a select group of end users outside the development team, who test the software in real-world conditions.

An example of acceptance testing could be a client reviewing an online shopping website to ensure it allows users to browse products, add them to the cart, and complete a purchase without any issues, as specified in the requirements. If the application works as intended, it gets approved for release.

7. Localization Testing

Localization Testing is a type of testing performed to ensure that a software application is properly adapted to different languages, regions, and cultures. This type of testing verifies that the application works correctly in various locales, focusing on language translations, formatting of dates, currencies, and other region-specific elements, such as text direction (e.g., right-to-left for Arabic or Hebrew).

The goal of localization testing is to ensure the software provides a seamless user experience for people in different regions, ensuring that it feels natural and appropriate for the target audience.

Example: For an e-commerce website being localized for France, localization testing would check if:

  • The website displays prices in euros.
  • The date format follows the French standard (DD/MM/YYYY).
  • The text is correctly translated and makes sense in French.
  • Any images, colors, or symbols used are culturally appropriate.
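
These region-specific checks can be written down as simple expected-versus-actual comparisons. The sketch below is only illustrative; the displayed values are hypothetical samples a tester might capture from the French version of the site.

    import re

    # Hypothetical values observed on the French version of the site.
    displayed_price = "49,99 €"
    displayed_date = "15/03/2025"

    # Prices should be shown in euros using the French decimal comma.
    assert re.fullmatch(r"\d+,\d{2} €", displayed_price), "Price is not in French euro format"

    # Dates should follow the French DD/MM/YYYY convention.
    assert re.fullmatch(r"\d{2}/\d{2}/\d{4}", displayed_date), "Date is not in DD/MM/YYYY format"

    print("French locale formatting checks passed.")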

Localization testing helps avoid awkward, confusing, or culturally insensitive issues in the software, ensuring it meets the expectations of users in different regions.

8. Compatibility Testing

Compatibility Testing is a type of testing that ensures a software application works as expected across different environments, including various devices, browsers, operating systems, and network configurations. The goal is to verify that the application provides a consistent experience for users, regardless of the platform or configuration they are using.

This testing is essential because users may access the software through a wide range of devices, browsers, or OS versions, and it’s important to ensure compatibility to avoid performance issues or functional failures.

Example: For a web application, compatibility testing would check:

  • If the application works on different browsers like Chrome, Firefox, Safari, and Edge.
  • If it is compatible with various operating systems such as Windows, macOS, and Linux.
  • If it functions correctly on different devices like smartphones, tablets, and desktops.
  • If the application behaves properly on various screen resolutions and sizes.

The goal of compatibility testing is to ensure that users experience consistent functionality and performance, regardless of their environment.

9. Database Testing

Database Testing is the process of verifying the integrity, consistency, and accuracy of data in a database. It involves checking that the data is stored, retrieved, and updated correctly, and that database operations (such as inserts, deletes, updates, and queries) work as expected. The goal is to ensure that the database performs well, maintains data integrity, and supports the application's functionality.

Examples:
Testing CRUD operations (Create, Read, Update, Delete).
If you're testing an e-commerce website's database, you might check if:

  • Product details (price, name, description) are correctly stored in the database.
  • When a user adds an item to the cart, it appears in the database correctly.
  • The total price calculation in the checkout process matches the data stored in the database.

The goal of database testing is to ensure that the database functions correctly, handles data accurately, and supports the application's needs.
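
As a minimal sketch of what CRUD verification looks like, the snippet below runs Create, Read, Update, and Delete checks against a throwaway in-memory SQLite database; the products table and its columns are hypothetical stand-ins for the real schema.

    import sqlite3

    # In-memory database as a stand-in for the application's real product database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")

    # Create: insert a product and confirm it is stored.
    conn.execute("INSERT INTO products (name, price) VALUES (?, ?)", ("Laptop", 999.99))

    # Read: the stored details should match what was inserted.
    row = conn.execute("SELECT name, price FROM products WHERE name = ?", ("Laptop",)).fetchone()
    assert row == ("Laptop", 999.99)

    # Update: change the price and confirm the new value is returned.
    conn.execute("UPDATE products SET price = ? WHERE name = ?", (899.99, "Laptop"))
    assert conn.execute("SELECT price FROM products WHERE name = ?", ("Laptop",)).fetchone()[0] == 899.99

    # Delete: remove the product and confirm it is gone.
    conn.execute("DELETE FROM products WHERE name = ?", ("Laptop",))
    assert conn.execute("SELECT COUNT(*) FROM products").fetchone()[0] == 0

    print("CRUD checks passed.")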

10. Recovery Testing

Recovery Testing verifies that an application can recover gracefully from failures such as crashes, power outages, or hardware problems, and resume normal operation without losing or corrupting data. This makes the application more reliable and resilient.

Examples:

If you're testing an online banking system, recovery testing might involve:

  • Simulating a power failure during a transaction to check if the system can recover and complete the transaction without data corruption or loss.
  • Testing whether the system can restore user data from backups after a crash.
  • Checking if the application can handle a server failure by switching to a backup server without interrupting the service for users.

11. Installation Testing

Installation Testing is the process of verifying that a software application can be successfully installed, configured, and uninstalled across various environments and platforms. The goal is to ensure that the installation process is smooth, that the software works properly after installation, and that any necessary configurations (e.g., dependencies, settings) are correctly set up.

Examples:

If you're testing a desktop application, installation testing might involve:

  • Installing the software on different operating systems (e.g., Windows, macOS).
  • Checking if the software runs properly after installation, and if any required third-party tools (like database engines or frameworks) are also installed correctly.
  • Uninstalling the software to confirm it removes all files and settings without leaving behind unwanted components.

12. Accessibility Testing

Accessibility Testing is the process of evaluating a software application to ensure it is usable by people with disabilities, including those with visual, auditory, motor, or cognitive impairments. The goal is to verify that the application complies with accessibility standards and guidelines, such as the Web Content Accessibility Guidelines (WCAG), and that it provides an inclusive experience for all users, regardless of their abilities.

Examples:

If you're testing an e-commerce website, accessibility testing might involve:

  • Checking if all buttons, links, and forms are accessible via keyboard shortcuts.
  • Verifying that a screen reader can read the product descriptions and checkout details.
  • Ensuring that the color scheme is readable for users with color blindness, such as avoiding red-green combinations.
  • Testing that all images (like product photos) have descriptive alt text for users who rely on screen readers.
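
Some of these checks, such as missing alt text, can be spot-checked with a small script alongside the manual review. The HTML fragment below is a hypothetical product listing, not taken from any real page.

    from html.parser import HTMLParser

    # Hypothetical fragment of a product listing page.
    PAGE = """
    <img src="laptop.jpg" alt="Silver 14-inch laptop">
    <img src="banner.jpg">
    """

    class AltTextChecker(HTMLParser):
        """Collect <img> tags that are missing a non-empty alt attribute."""
        def __init__(self):
            super().__init__()
            self.missing_alt = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attributes = dict(attrs)
                if not attributes.get("alt"):
                    self.missing_alt.append(attributes.get("src", "<unknown>"))

    checker = AltTextChecker()
    checker.feed(PAGE)
    print("Images missing alt text:", checker.missing_alt)  # expected: ['banner.jpg']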

13. End-to-End Testing

End-to-End (E2E) Testing is a type of testing that verifies the entire flow of an application, from start to finish, to ensure that all components and systems work together as expected. It simulates real user scenarios to validate the integration and functionality of the application in a production-like environment. The goal is to ensure that all parts of the system, including the frontend, backend, databases, and external systems, work seamlessly together.

In E2E testing, testers check if the application meets business requirements and performs the complete workflow accurately, such as logging in, making transactions, or generating reports.

Examples:

For an online shopping website, End-to-End Testing would involve:

  • Starting from the user visiting the website and browsing products.
  • Adding items to the shopping cart.
  • Proceeding to checkout and entering payment details.
  • Verifying that the payment is processed successfully.
  • Receiving an order confirmation email and tracking the order in the user's account.

The goal of E2E testing is to simulate the full user journey to ensure that the system works as intended across all components and scenarios, providing a smooth and functional experience from start to finish.

14. Negative Testing

Negative Testing is the process of deliberately testing an application with invalid, unexpected, or incorrect inputs to ensure that it handles errors gracefully and doesn't crash or behave unpredictably. The goal is to verify that the software can handle edge cases or scenarios where users might make mistakes or input data that doesn't follow the expected format. Negative testing helps identify potential vulnerabilities or weaknesses in error handling and validation.

Examples:

For an online form that asks for a user's email address, Negative Testing might involve:

  • Entering an invalid email format (e.g., "user@domain" instead of "user@domain.com").
  • Inputting special characters or SQL injection attempts to test how the form handles potentially harmful input.
  • Leaving the email field empty and submitting the form to check if the application displays the appropriate error message.
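
Here is a minimal sketch of how these negative cases pair invalid inputs with the error messages a tester expects to see; the validate_email function is an illustrative stand-in for the application's real validation logic.

    import re

    def validate_email(value):
        """Hypothetical stand-in for the form's email validation logic."""
        if not value:
            return "Email is required"
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", value):
            return "Please enter a valid email address"
        return "OK"

    # Invalid inputs a tester would try during negative testing,
    # paired with the error message they expect to see.
    negative_cases = {
        "": "Email is required",                                 # empty field
        "user@domain": "Please enter a valid email address",     # missing top-level domain
        "user' OR '1'='1": "Please enter a valid email address"  # SQL-injection style input
    }

    for value, expected in negative_cases.items():
        assert validate_email(value) == expected, f"Unexpected handling of {value!r}"

    print("All negative-testing checks behaved as expected.")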

Conclusion
Manual testing is a critical part of the software development lifecycle. It helps ensure the application is functional, user-friendly, and free of defects. While automation testing is useful for repetitive tasks, manual testing is irreplaceable for exploratory, usability, and ad-hoc testing. By combining different types of manual testing, testers can deliver high-quality software that meets user expectations.

If you found this guide helpful, share it with your peers and start applying these testing techniques in your projects! 🚀
