3.4 Testing and Evaluating Software Solutions

Testing and evaluation are integral to all stages of the software development cycle.

In this chapter, we focus on the higher levels of testing that take place after the project has been implemented in a programming language.

Alpha testing is performed with real data by personnel within the software development company.

Beta testing occurs when the product is distributed to a limited number of outside users. These users are engaged to report any faults or recommendations back to the software development company.

In this chapter, we are concerned with both alpha and beta testing.

The testing and evaluation process is central to a software development company’s quality assurance. 

Our major aim is to ensure the product meets the original design specifications created during the ‘Defining and Understanding the Problem’ stage. 

In terms of quality assurance, we also evaluate the success of the design and development processes. Another purpose of testing is to evaluate the product’s performance against recognised industry standards or benchmarks, as sketched below.
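As a simple illustration of benchmarking, a performance criterion might be expressed as a target time for a critical operation. The Python sketch below is hypothetical: the operation being timed and the 50 millisecond target are assumptions for illustration, not part of any particular standard.

    import time

    def benchmark(operation, runs=100):
        """Time an operation over several runs; return the average in milliseconds."""
        start = time.perf_counter()
        for _ in range(runs):
            operation()
        return (time.perf_counter() - start) / runs * 1000

    # Hypothetical benchmark: the critical operation must average under 50 ms.
    def critical_operation():
        sorted(range(100000))  # stand-in for the real operation being assessed

    average_ms = benchmark(critical_operation)
    verdict = "meets" if average_ms < 50 else "fails"
    print(f"Average time {average_ms:.2f} ms: {verdict} the 50 ms benchmark")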


Students Learn About:

Testing the software solution
  • comparison of the solution with the design specifications
  • generating relevant test data for complex solutions
  • comparison of actual with expected output 
  • levels of testing
– module
- test that each module and subroutine functions correctly
- use of drivers (a short driver sketch follows this list)
– program
- test that the overall program (including incorporated modules and subroutines) functions correctly
– system
- test that the overall system (including all programs in the suite) functions correctly, including the interfaces between programs
– acceptance testing
  • the use of live test data to ensure that the testing environment accurately reflects the expected environment in which the new system will operate
– large file sizes
– mix of transaction types
– response times
– volume of data (load testing; a load-testing sketch follows this list)
– effect of the new system on the existing systems in the environment into which it will be installed
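To make module-level testing concrete, the sketch below shows a test driver in Python: a short, throwaway program that calls one module in isolation with the test data prepared at the design stage and compares actual with expected output. The module calculate_discount, its discount rule and the boundary test values are all hypothetical.

    # Module under test: a hypothetical discount calculation subroutine.
    def calculate_discount(total):
        """Return the discount on a purchase total: 10% over $100, otherwise nothing."""
        if total > 100:
            return round(total * 0.10, 2)
        return 0.0

    # Driver: exercises the module in isolation, comparing actual output
    # with expected output for each design-stage test case.
    test_data = [
        (50.00, 0.0),     # well below the boundary
        (100.00, 0.0),    # on the boundary
        (100.01, 10.0),   # just above the boundary
        (200.00, 20.0),   # well above the boundary
    ]

    for total, expected in test_data:
        actual = calculate_discount(total)
        result = "PASS" if actual == expected else "FAIL"
        print(f"{result}: calculate_discount({total}) expected {expected}, got {actual}")

Note that the test data deliberately clusters around the $100 boundary; choosing values on, just above and below a boundary is one common way of generating relevant test data.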
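Live test data at the system level is harder to condense, but the idea behind load testing and response-time measurement can be sketched as follows. Everything named here (process_transaction, the 70/20/10 transaction mix, the volume of 500 transactions) is an assumption for illustration only.

    import random
    import time

    def process_transaction(kind):
        """Stand-in for the real system call being load tested."""
        time.sleep(random.uniform(0.001, 0.005))  # simulated processing time

    # A mix of transaction types weighted to reflect live data.
    mix = ["sale"] * 70 + ["refund"] * 20 + ["query"] * 10

    response_times = []
    for _ in range(500):  # volume of data: 500 transactions in one run
        start = time.perf_counter()
        process_transaction(random.choice(mix))
        response_times.append(time.perf_counter() - start)

    print(f"Transactions: {len(response_times)}, "
          f"worst response: {max(response_times) * 1000:.1f} ms, "
          f"average: {sum(response_times) / len(response_times) * 1000:.1f} ms")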

Reporting on the testing process

  • documentation of the test data and output produced (a simple test-log sketch follows this list)
– use of CASE tools
  • communication with those for whom the solution has been developed, including:
– test results
– comparison with the original design specifications
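Short of a full CASE tool, one simple way to document test data and the output produced is to have the driver write each case to a log file that can later be shown to the client alongside the original design specifications. The file name, column layout and sample results below are illustrative only.

    import csv
    from datetime import date

    # Hypothetical results gathered by a driver such as the one shown earlier.
    results = [
        ("T01", "calculate_discount(50.00)", "0.0", "0.0", "PASS"),
        ("T02", "calculate_discount(100.01)", "10.0", "10.0", "PASS"),
    ]

    # Write each test case, its expected and actual output, and the result
    # to a log file that can be tabled with the client.
    with open("test_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Date", "Test ID", "Test case", "Expected", "Actual", "Result"])
        for test_id, case, expected, actual, result in results:
            writer.writerow([date.today().isoformat(), test_id, case,
                             expected, actual, result])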

Evaluating the software solution

  • verifying that the requirements have been met appropriately
  • quality assurance

Post-implementation review

  • facilitation of open discussion and evaluation with the client
  • client sign-off process
Students Learn To:

  • differentiate between system and program test data
  • test your solution with the test data created at the design stage, comparing actual with expected output
  • use drivers and/or stubs to test specific modules and subroutines before the rest of the code is developed (a stub sketch follows this list)
  • recognise the importance of module testing before the module or subroutine is incorporated into the larger solution
  • recognise that while an individual program or module may have been successfully tested, when it is incorporated into a larger system, problems may become apparent 
  • demonstrate the features of a new system to the client
  • assess the new software solution to ensure that it meets the specified quality assurance criteria
  • assess the performance of the new software solution against the criteria specified by the benchmark
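A stub is the mirror image of a driver: a driver calls the module under test, whereas a stub stands in for a module that has not yet been written, returning a fixed result so that its callers can be tested early. In the Python sketch below, the names (lookup_price, order_total) and the hard-coded price are hypothetical.

    # Stub: stands in for a price-lookup module that has not been written yet,
    # returning a fixed, known value so that its callers can be tested.
    def lookup_price(item_code):
        return 9.95

    # The module we actually want to test; it depends on lookup_price.
    def order_total(item_code, quantity):
        return round(lookup_price(item_code) * quantity, 2)

    # With the stub in place, order_total can be tested before the real
    # lookup_price exists: 3 items at the stubbed $9.95 should total $29.85.
    expected = 29.85
    actual = order_total("A100", 3)
    print("PASS" if actual == expected else f"FAIL: expected {expected}, got {actual}")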