
What is Continuous Testing?

Continuous testing as a practice is what you do when you practice test-driven development properly. It provides tool support that detects when you save a code file, automatically runs the entire available test suite, and reports the results back to you. Consequently, it changes the way you do test-driven development, because you never need to think about running your tests again. You simply keep coding: you add another test, implement it, and in the meantime the red light has lit up and switched back to green as you make it pass.
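
As a rough illustration, here is a minimal sketch of such a watch-and-rerun loop, assuming a pytest-based suite; real tools (pytest-watch, IDE test runners, etc.) do this more efficiently using filesystem events, and the paths and polling interval below are purely illustrative:

```python
# Minimal sketch of a continuous test runner: poll the source tree for
# changed .py files and rerun the whole test suite whenever one is saved.
# Assumes a pytest-based suite; directory and interval are illustrative.
import subprocess
import time
from pathlib import Path

WATCHED_DIR = Path(".")        # project root (assumption)
POLL_SECONDS = 1.0

def snapshot():
    """Map every .py file to its last-modified time."""
    return {p: p.stat().st_mtime for p in WATCHED_DIR.rglob("*.py")}

def run_tests():
    """Run the whole suite and report red/green on the console."""
    result = subprocess.run(["python", "-m", "pytest", "-q"])
    print("GREEN" if result.returncode == 0 else "RED")

if __name__ == "__main__":
    last = snapshot()
    run_tests()
    while True:
        time.sleep(POLL_SECONDS)
        current = snapshot()
        if current != last:        # a file was added, removed, or saved
            last = current
            run_tests()
```
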
It is a pain in the ass to build, maintain, and run automated tests. So, why do we do it?

  • Automated tests help you release faster by reducing the amount of manual testing needed for each release. That is why automated tests and continuous integration are essential if you release more than once every two weeks. You may find that release cycles are getting longer because it takes more and more time to test increasingly complex systems. This is a signal that you need automated testing.
  • By providing immediate feedback, automated tests give programmers the confidence to make changes. If your developers spend their time fixing a lot of small things, but don’t have the confidence to make significant changes, you need automated testing. However, while automated tests will tell you if you broke something that used to work, they are not very good at finding bugs in new features.
  • An automated test is a script that looks for errors: it runs some of your code and tells you whether it works as expected or throws an error (see the minimal example after this list). We call it “continuous integration” when we set up servers to run automated tests frequently.
  • Building and maintaining automated tests is a lot of work. To get a good return on investment, use these measures of efficiency to evaluate your testing:
  • Find real problems: The automated testing program should find enough real problems to be worth the effort. You can measure this. Your testing is working if you are giving programmers the confidence to make changes and reducing your time to test a release.
  • Avoid false alarms: Many times a test will show a failure because of an intended change, and you will have to go back and modify the test. That’s a waste of energy. Later in this chapter we will show you how to select types of tests that minimize this waste.
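
For illustration, here is what such a script might look like as a minimal pytest test; the `discount_price` function is a hypothetical stand-in for application code, not something from this article:

```python
# Minimal sketch of an automated test: it runs a piece of application code
# and reports whether the result matches expectations.
import pytest

def discount_price(price: float, percent: float) -> float:
    """Hypothetical application code under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_applies_correctly():
    assert discount_price(200.0, 25) == 150.0

def test_invalid_percent_raises():
    with pytest.raises(ValueError):
        discount_price(200.0, 150)
```

A continuous integration server would simply run `pytest` on every commit and report red or green back to the team.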

Continuous Integration and Delivery recommend automated compilation, high-standard unit tests, integration tests, agile testing, and sign-off.

The role of QA has mutated into that of the QABA (aka ‘a bloody good BA’) – domain experts who represent the business in the delivery team and who are also responsible for creating the acceptance criteria for user stories. The acceptance criteria are expressed as scenarios that can be easily converted into actual test code, as the sketch below illustrates.
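
As a sketch of what “scenarios converted into test code” might look like, here is a made-up Given/When/Then scenario and a corresponding pytest test; the scenario and the `ShoppingCart` class are purely illustrative:

```python
# Hypothetical scenario (acceptance criteria written by the QABA):
#   Given an empty shopping cart
#   When the customer adds two items priced 10.00 and 5.50
#   Then the cart total is 15.50
# The developer converts it into executable test code; ShoppingCart is a
# made-up class used only for illustration.

class ShoppingCart:
    def __init__(self):
        self._items = []

    def add_item(self, price: float) -> None:
        self._items.append(price)

    def total(self) -> float:
        return round(sum(self._items), 2)

def test_cart_total_matches_scenario():
    # Given an empty shopping cart
    cart = ShoppingCart()
    # When the customer adds two items
    cart.add_item(10.00)
    cart.add_item(5.50)
    # Then the cart total is 15.50
    assert cart.total() == 15.50
```
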

Developers write the application code and the code that tests it, including creating any tooling. The QABA can then sign off the story on completion without having to go back to the business. The business still sees new features and capabilities at weekly demos and show’n’tells, but is rarely involved with the delivery team day to day. That way the entire company gets to do UAT on new functionality before it reaches the customers.

A fundamental pillar of continuous delivery is that all* your tests must be automated. To achieve this, the QA organization should be in the business of writing the test scenarios that the code is evaluated against, and of signing off that the code satisfies them. Test code should be a first-class citizen of the application and should be written by people whose primary job is writing code – the developers. I will say it again – QA should not be in the business of writing test code.

Developers are responsible for quality and should act like it. Sometimes that means taking back responsibility that should never have been handed to the QA organisation in the first place. Quality is too important to leave to QA. Developers need to take full responsibility for the quality of their code, and they should be in the firing line if something breaks.

The role of QA is to keep the developers on the straight and narrow, and the most effective way of doing this is for QA to apply their confrontational mindset to the code via the acceptance criteria used to sign off the new functionality.

The following types of testing must, or at least should, be automated to accomplish continuous delivery.

Incremental integration testing – A bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. The application’s functionality and modules should be independent enough to test separately. Done by programmers or by testers.

Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
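A minimal sketch of what an integration test might look like, with two hypothetical modules (a repository and a service) exercised together rather than in isolation; both classes are made up for illustration:

```python
# Integration test sketch: the service and the repository are tested as a
# combined unit, checking that they work together, not just individually.

class InMemoryUserRepository:
    """Stands in for a real data-access module."""
    def __init__(self):
        self._users = {}

    def save(self, user_id: str, name: str) -> None:
        self._users[user_id] = name

    def find(self, user_id: str):
        return self._users.get(user_id)

class UserService:
    """Business-logic module that depends on the repository."""
    def __init__(self, repository: InMemoryUserRepository):
        self._repository = repository

    def register(self, user_id: str, name: str) -> None:
        self._repository.save(user_id, name)

    def greeting(self, user_id: str) -> str:
        name = self._repository.find(user_id)
        return f"Hello, {name}!" if name else "Hello, guest!"

def test_service_and_repository_work_together():
    service = UserService(InMemoryUserRepository())
    service.register("u1", "Asha")
    assert service.greeting("u1") == "Hello, Asha!"
```
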

Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per the requirements. It is black-box testing geared to the functional requirements of an application.
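For example, a functional test might check only inputs and outputs against a stated requirement, as in this sketch; the `format_invoice_number` function and the requirement it checks are hypothetical:

```python
# Functional (black-box) test sketch: only the observable behaviour is
# checked against the requirement; the internals are not inspected.

def format_invoice_number(sequence: int, year: int) -> str:
    """Hypothetical code under test: invoices look like INV-2024-000042."""
    return f"INV-{year}-{sequence:06d}"

def test_invoice_number_matches_requirement():
    # Requirement (made up for illustration):
    # "Invoice numbers have the form INV-<year>-<6-digit sequence>."
    assert format_invoice_number(42, 2024) == "INV-2024-000042"
```
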

System testing – The entire system is tested as per the requirements. This is black-box testing based on the overall requirements specification, and it covers all combined parts of the system.

End-to-end testing – Similar to system testing; it involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems where appropriate.
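A hedged sketch of an end-to-end check against a deployed test environment; the base URL and endpoints are hypothetical, and the test assumes the whole stack (web server, database, downstream services) is running, which is what distinguishes it from a unit or integration test:

```python
# End-to-end test sketch: exercise the API, the database behind it, and any
# downstream integrations through the same path a real user would take.
import requests

BASE_URL = "https://staging.example.com"   # assumption: a staging deployment

def test_user_can_place_an_order_end_to_end():
    # The service and its database are reachable.
    health = requests.get(f"{BASE_URL}/health", timeout=5)
    assert health.status_code == 200

    # Placing an order exercises the API, the database, and any
    # downstream systems behind it (all endpoints are hypothetical).
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"item": "book", "quantity": 1},
        timeout=10,
    )
    assert response.status_code == 201
    assert "order_id" in response.json()
```
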

Sanity testing – Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes on initial use, the system is not stable enough for further testing, and the build is sent back to be fixed.
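A minimal sketch of such a sanity (smoke) suite, assuming pytest markers; the `create_app` factory, the Flask-style test client, and the `/login` route are all hypothetical:

```python
# Sanity (smoke) suite sketch: a few fast checks that gate the rest of the
# testing effort. The app module and routes are made up for illustration.
import pytest

@pytest.mark.smoke
def test_application_starts():
    from app import create_app          # hypothetical application factory
    assert create_app() is not None

@pytest.mark.smoke
def test_login_page_renders():
    from app import create_app
    client = create_app().test_client()  # assumes a Flask-style test client
    assert client.get("/login").status_code == 200
```

Running only `pytest -m smoke` gives a quick go/no-go signal before the full suite is attempted.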

Regression testing – Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the whole system manually, so automation tools are typically used for this type of testing.
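One common shape for automated regression coverage is a test that pins a previously fixed defect so it cannot silently return; the `parse_amount` function and the bug it refers to are made up for illustration:

```python
# Regression test sketch: once a bug is fixed, a test locks in the correct
# behaviour so a later change cannot quietly reintroduce the defect.

def parse_amount(text: str) -> float:
    """Hypothetical code under test: accepts '1,234.50' style input."""
    return float(text.replace(",", ""))

def test_amounts_with_thousands_separators_regression():
    # Hypothetical bug report: values like "1,234.50" used to raise ValueError.
    assert parse_amount("1,234.50") == 1234.50
```
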

Acceptance testing – Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The user or customer performs this testing to determine whether to accept the application.

Load testing – A form of performance testing that checks system behaviour under load, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.
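A very rough sketch of the idea, using hand-rolled concurrency; in practice a dedicated tool such as JMeter, Locust, or k6 would be used, and the target URL here is hypothetical:

```python
# Load test sketch: fire increasing numbers of concurrent requests at an
# endpoint and observe where the average response time starts to degrade.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "https://staging.example.com/"   # assumption: a test environment

def timed_request(_):
    start = time.perf_counter()
    requests.get(BASE_URL, timeout=30)
    return time.perf_counter() - start

def run_load_step(concurrent_users: int) -> float:
    """Average response time with the given number of concurrent users."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        durations = list(pool.map(timed_request, range(concurrent_users)))
    return sum(durations) / len(durations)

if __name__ == "__main__":
    for users in (1, 10, 50, 100):
        print(f"{users:>4} users -> avg {run_load_step(users):.3f}s")
```
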

Stress testing – The system is stressed beyond its specifications to check how and when it fails. It is performed under heavy load, for example by exceeding storage capacity, running complex database queries, or feeding continuous input to the system or database.

Performance testing – A term often used interchangeably with ‘stress’ and ‘load’ testing; it checks whether the system meets its performance requirements. Various performance and load tools are used for this.
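As an illustration of an automated performance check, here is a sketch that asserts a key operation stays within an agreed time budget; the `search_catalogue` function and the 200 ms budget are hypothetical:

```python
# Performance test sketch: fail the build if a key operation exceeds its
# agreed time budget. Function and budget are made up for illustration.
import time

def search_catalogue(query: str):
    """Hypothetical code under test."""
    catalogue = [f"item-{i}" for i in range(100_000)]
    return [item for item in catalogue if query in item]

def test_catalogue_search_meets_performance_budget():
    start = time.perf_counter()
    results = search_catalogue("item-99")
    elapsed = time.perf_counter() - start
    assert results                      # the search still works...
    assert elapsed < 0.2                # ...and stays inside the budget
```
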

Usability testing – A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help available whenever the user gets stuck? Basically, system navigation is checked in this type of testing.
