IS "TESTING IS DOUBTING"?

"Testing is doubting!" is a saying that many developers know and use humorously in France. It's often the response when someone is unsure how to admit that they "haven’t really established a testing policy," that "unit tests took too much time and slowed down development," that "they commented them out because the application wouldn't compile anymore," or even "we don’t do them; our team isn’t sized for it." (All these responses have been heard from clients). So why test? How should we test? This is what we’ll explore in this article.
Why Test?
Software testing is essential for many reasons:
- Quality Assurance:
Testing ensures that the software meets specifications and user expectations. It guarantees that the final product is of high quality, free of critical defects, and meets functional needs.
- Early Bug Detection:
Identifying defects early in the development process allows for corrections before they become costly or difficult to fix. Testing reduces the risk of major failures when deploying to production.
- Cost Reduction:
Fixing a bug in the later stages of development or, even worse, in production can be extremely expensive. Proactively testing reduces the costs associated with maintenance and post-delivery support.
- Security:
Testing, especially security testing, helps identify vulnerabilities that could be exploited by attackers. This is crucial for protecting user data and the integrity of the system.
- Regulatory Compliance:
In certain industries, testing is required to comply with specific standards and regulations, ensuring the software meets legal requirements.
When Should We Test?
In general, testing should be part of the development process. The concept of TDD (Test-Driven Development) has been discussed for over a decade now. This approach, where writing tests upfront drives feature development, requires a certain level of organization but has proven effective. However, even without fully implementing a TDD approach, setting up tests is still the best practice to ensure the long-term viability of an application.
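To make the TDD rhythm concrete, here is a minimal sketch in Python (used here purely as an illustrative stand-in; the `slugify` function is a hypothetical example, not from this article). The test is written first and fails, then just enough code is written to make it pass, then the code is refactored with the test as a safety net:

```python
# Step 1 (red): the test is written first; it fails because slugify
# does not exist yet. It encodes the expected behavior up front.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim me  ") == "trim-me"

# Step 2 (green): write just enough code to make the test pass.
def slugify(text: str) -> str:
    return "-".join(text.strip().lower().split())

# Step 3 (refactor): improve the implementation freely;
# the test keeps the behavior locked in.
test_slugify()  # passes silently
```

The point is not the function itself but the order of operations: the test drives the design of the feature rather than documenting it after the fact.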
The only situation where one might question investing in tests is during a POC (Proof of Concept) phase if the development is disposable and/or will be reworked after POC validation. But be cautious—this only applies to truly disposable POCs, as many POCs end up becoming applications, often without tests and with uncertain futures.
In all other cases → We test!
Different Types of Tests
There are a wide variety of software tests, each with a specific purpose. They are often grouped into two main categories:
- Functional Tests: These tests aim to verify the functional aspects of an application—checking that the various features work as expected. They can be executed manually by teams running functional scenarios or through automation tools.
- Non-Functional Tests: These tests aim to verify the non-functional aspects of a software application, such as its performance, usability, reliability, security, etc.
Functional Tests:
- Unit Tests:
Goal: Verify the smallest units of code, such as functions or methods.
When: During the development phase, usually automated.
Advantage: Quickly detect errors at a very granular level.
Unit tests are implemented by the development team within the application, covering the application's functions. The finer the test coverage and the more thorough the mesh, the quicker and more precise the identification and correction of an anomaly in case of regression. Unit tests are the cornerstone of any testing approach. If you're going to invest in just one type of test initially, it should be this one, as revisiting them later is more labor-intensive and costly.
- Integration Tests:
Goal: Ensure that the different modules or services work well together.
When: After unit tests, once the various components of the system are integrated.
Advantage: Detects issues related to the interaction between different modules.
Once modules are integrated, it's critical to test their functionality as a combined unit. Integration tests verify that combined modules in an application work together properly, and they are particularly important for distributed systems and client-server applications.
- Interface Tests or API Tests:
Goal: Verify the communication between two remote software systems.
When: After integration tests, before production.
Advantage: Detects communication anomalies between two distinct systems.
In a microservices development approach, this test is crucial to ensure the functionality of the various exposed services and the interface contract.
- Regression Tests:
Goal: Ensure that new changes have not introduced new bugs into previously tested features.
When: After every major change or software update.
Advantage: Ensures product stability as it evolves.
Regression testing checks whether new developments in a new version cause functional regressions compared to the previous version. This test, which should be run before each new production release, can be time-consuming as the application grows. Fortunately, automation is available and is often a best practice in the long term.
- Acceptance Tests or UAT (User Acceptance Tests):
Goal: Validate that the software meets the needs and expectations of the end users.
When: At the end of the development cycle, before production.
Advantage: Ensures that the product is ready to be delivered and accepted by the client or users.
Acceptance tests (UAT) are conducted after the development of new features to validate that they conform to expectations. Acceptance criteria define how these expectations are met, through various scenarios.
- Exploratory Tests:
Goal: Allow testers to proactively discover bugs by exploring the software without predefined scripts.
When: At any point, generally after formal tests.
Advantage: Complements automated tests by identifying unexpected issues.
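As an illustration of the unit tests described above, here is a minimal sketch using Python's standard `unittest` module (the article mentions JUnit or TestNG for Java; Python is used here only as a compact stand-in, and `apply_discount` is a hypothetical function under test). Note that it covers a nominal case, an edge case, and an invalid input:

```python
import unittest

# Hypothetical function under test: applies a percentage discount to a price.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # Positive case: nominal input produces the expected result.
    def test_nominal_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    # Edge case: a 0% discount leaves the price unchanged.
    def test_zero_discount(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    # Negative case: invalid input must be rejected, not silently accepted.
    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically (a test runner or CI would normally do this).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
unittest.TextTestRunner(verbosity=0).run(suite)
```

The finer the mesh of such tests across the codebase, the faster a regression is pinpointed to a specific function.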
Non-Functional Tests:
- Performance Tests:
Goal: Evaluate the system’s responsiveness, stability, and scalability under different loads.
When: Before production and during significant load increases.
Advantage: Identifies bottlenecks and ensures that the system can handle traffic spikes.
- Security Tests:
Goal: Identify potential vulnerabilities and security flaws in the software.
When: Continuously throughout the development cycle and before production.
Advantage: Protects user data and strengthens the product’s overall security.
Implementing Software Testing
Implementing software tests requires careful planning and a systematic approach:
- Define a Testing Strategy:
Start by creating a clear testing strategy that defines which tests to perform, which tools to use, and the success criteria. This strategy should align with the project's objectives.
- Choose Testing Tools:
Select tools suitable for the different types of tests you plan to carry out. For instance, JUnit or TestNG for unit tests, Selenium for UI tests, or JMeter for performance tests.
- Automate Tests:
Automation is essential for unit, integration, and regression tests. It saves time, reduces human error, and enables more efficient repetitive testing.
- Set Up a Testing Environment:
Create testing environments that closely replicate production conditions, including hardware, software, network configurations, and databases.
- Write Test Cases:
Write detailed test cases that cover all possible use scenarios, including both positive and negative cases. Each test case should have clear acceptance criteria.
- Execute the Tests:
Execute tests according to the test plan and record the results. Make sure to document detected bugs, their priority, and their resolution status.
- Analyze the Results:
After each testing cycle, analyze the results to identify trends and risk areas. Use this data to improve the development and testing strategies.
- Retrospectives and Continuous Improvement:
Hold regular retrospectives to discuss lessons learned and identify areas for improvement. The testing process should evolve continually to meet the project’s needs.
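The "write test cases with both positive and negative cases" step above can be sketched in code: one table of documented cases, each run as its own sub-test so a failure identifies the exact scenario. This is a minimal Python illustration (the `is_valid_username` function and its rules are hypothetical examples):

```python
import unittest

# Hypothetical function under test: validates a username
# (alphanumeric only, 3 to 20 characters).
def is_valid_username(name: str) -> bool:
    return name.isalnum() and 3 <= len(name) <= 20

class TestUsernameValidation(unittest.TestCase):
    def test_cases(self):
        # Each tuple is one documented test case: (input, expected, description).
        cases = [
            ("alice", True, "nominal alphanumeric name"),    # positive case
            ("ab", False, "too short"),                      # negative case
            ("a" * 21, False, "too long"),                   # negative case
            ("bad name!", False, "forbidden characters"),    # negative case
        ]
        for value, expected, description in cases:
            # subTest reports each failing case by its description.
            with self.subTest(description):
                self.assertEqual(is_valid_username(value), expected)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestUsernameValidation)
unittest.TextTestRunner(verbosity=0).run(suite)
```

Keeping the cases in a table like this also serves as living documentation: the acceptance criteria are readable at a glance.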
Best Practices in Software Testing
Here are some best practices to ensure effective software testing:
- Involve Testers Early:
Involve testers early in the project, ideally during the planning phase, to ensure that tests align with project requirements.
- Prioritize Tests:
Not all features are equal in terms of criticality. Prioritize tests based on the most critical features for the user and the highest risks.
- Keep Documentation Up to Date:
Ensure that test documentation remains current and accurately reflects the software's current state, including test cases, automated scripts, and bug reports.
- Combine Manual and Automated Tests:
Automated tests are great for repetitive tasks, but manual tests remain essential for complex scenarios or exploratory testing. A combination of both provides the best coverage.
- Conduct Code Reviews:
Integrate code reviews into the development process. This helps catch potential errors before the code enters formal testing.
- Test in Real Conditions:
Whenever possible, conduct tests under real-world conditions to simulate the user experience, including tests on different devices, browsers, and network configurations.
- Adopt a Quality Culture:
Promote a culture where quality is everyone’s responsibility, not just the testers'. Every team member should be committed to delivering a high-quality product.
Cost and Impact on Productivity
The cost of implementing tests can vary depending on several factors:
- Type of Tests: Unit tests are generally inexpensive, while performance and security tests may require costly tools.
- Automation vs. Manual Testing: Automation allows for rapid, repetitive testing, but the initial setup of automated testing tools and scripts can be costly.
- Software Complexity: The more complex the software, the more expensive it will be to test thoroughly.
- Test Frequency: The more frequently the software is tested, the higher the maintenance costs.
- Training and Human Resources: Testing requires skilled testers, and training can add to the costs.
Cost Estimate: On average, testing can represent 20% to 40% of the total software development budget, but this can vary based on complexity and specific requirements.
The cost that's harder to estimate is the cost of not testing (see our article on the importance of COI for more details)! Yet it is precisely this cost that should be weighed against the initial investment in a testing approach.
Conclusion
Software testing is a crucial element in ensuring the quality, security, and reliability of digital products. By understanding the different types of tests, effectively integrating them into the development cycle, and following best practices, teams can reduce risks, improve user satisfaction, and guarantee the success of their software projects. Software testing is not just a formality, but an essential part of creating robust and durable products.
So yes, testing is indeed doubting, but perhaps what we're doubting is the future of an application that isn't covered by tests. In every project I've been involved in, I've always closely examined the tests. An application without at least minimal test coverage has no value in my eyes: it's an application we cannot control over time or protect against regressions, and one that will demand, with each version, a significant human investment that ultimately proves more costly than the initial setup.
In the beginning, we choose speed and trust the developers… then versions follow, turnover happens… a first regression appears, then two, then three… support catches fire, users lose patience and trust in the application, and the tool's reputation struggles to recover from the negative word of mouth.
I've seen many (too many?) scenarios like this! In a large organization, you can get out of this mess by replacing the tool, but in a small company it's a heavy blow, a major loss that sometimes signals the end of the adventure.