Saturday, November 22, 2008

Code review, Inspection & Walkthrough

Code review

Code review is a phase in the software development process in which the authors of code, peer reviewers, and perhaps quality assurance (QA) testers get together to review code. Finding and correcting errors at this stage is relatively inexpensive and tends to reduce the more expensive process of handling, locating, and fixing bugs during later stages of development or after programs are delivered to users.

Reviewers read the code line by line to check for:
  • Flaws or potential flaws
  • Consistency with the overall program design
  • The quality of comments
  • Adherence to coding standards

Code review may be especially productive for identifying security vulnerabilities. Specialized application programs are available that can help with this process. Automated code reviewing facilitates systematic testing of source code for potential trouble such as buffer overflows, race conditions, memory leakage, size violations, and duplicate statements. Code review is also commonly done to test the quality of patches.
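As a rough, hypothetical illustration of what such automated reviewing can look like, the short Python sketch below scans a source file for two of the simpler problems mentioned above, over-long lines and duplicate statements. The file name, the length limit, and the checks themselves are assumptions made for the example; real review tools go much further.

```python
import sys
from collections import Counter

MAX_LINE_LENGTH = 100  # assumed coding-standard limit, for illustration only

def review(path):
    """Report over-long lines and duplicate statements in one source file."""
    with open(path, encoding="utf-8") as source:
        lines = [line.rstrip("\n") for line in source]

    findings = []

    # Check adherence to the (assumed) line-length coding standard.
    for number, line in enumerate(lines, start=1):
        if len(line) > MAX_LINE_LENGTH:
            findings.append(f"{path}:{number}: line longer than {MAX_LINE_LENGTH} characters")

    # Flag statements that appear more than once (possible copy-paste duplication).
    counts = Counter(line.strip() for line in lines if line.strip())
    for statement, count in counts.items():
        if count > 1 and not statement.startswith("#"):
            findings.append(f"{path}: duplicate statement ({count} times): {statement!r}")

    return findings

if __name__ == "__main__":
    # Usage (hypothetical): python review.py mymodule.py
    for finding in review(sys.argv[1]):
        print(finding)
```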

Inspection

An inspection is, most generally, an organized examination or formal evaluation exercise. It involves measurements, tests, and gauges applied to certain characteristics of an object or activity. The results are usually compared with specified requirements and standards to determine whether the item or activity is in line with these targets. Inspections are usually non-destructive.


Non-Destructive Examination (NDE) or Non-Destructive Testing (NDT) describes a number of technologies used to analyze materials for either inherent flaws or damage from use. Some common methods are visual inspection, liquid or dye penetrant, magnetic particle, radiography, ultrasonics, eddy current, acoustic emission, and thermography. In addition, many non-destructive inspections can be performed with a precision scale or, when the object is in motion, a checkweigher.


A surprise inspection tends to have different results than an announced inspection. Leaders who want to know how lower echelons of their organization are typically performing sometimes drop in unannounced to see what is going on and what conditions are like. When an inspection is scheduled in advance, it gives people a chance to cover up or fix mistakes. A surprise inspection therefore gives inspectors a better picture of the typical state of the inspected object than an announced inspection does.

Walkthrough

The term walkthrough describes the consideration of a process at an abstract level.


The term is often employed in the software industry (see software walkthrough) to describe the process of inspecting algorithms and source code by following paths through the algorithms or code as determined by input conditions and choices made along the way. The purpose of such code walkthroughs is generally to provide assurance of the fitness for purpose of the algorithm or code, and occasionally to assess the competence or output of an individual or team.
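For instance, a walkthrough of the small, made-up function below for the input [3, -1, 4] would follow one particular path through the code; the comments record the choice taken at each branch, which is essentially what reviewers do aloud during the session.

```python
def largest_positive(numbers):
    """Return the largest positive number in the list, or None if there is none."""
    best = None
    for n in numbers:
        if n <= 0:                      # input [3, -1, 4]: branch taken only for -1
            continue
        if best is None or n > best:    # taken for 3 (best is None) and for 4 (4 > 3)
            best = n
    return best                         # the walkthrough concludes: 4 is returned

print(largest_positive([3, -1, 4]))     # confirms the traced result: 4
```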


The term is employed in the theatrical and entertainment industry to describe a rehearsal where the major issues of choreography and interaction are practiced and resolved, prior to more formal "dress rehearsals".


The term is also used in the world of learning, where a tutor/trainer will walk through the process for the first time. It is regarded as a literal walk-through of the learning at the group's pace, ensuring that everyone takes in the new knowledge and skills.


Something akin to a walkthrough is used in many forms of human endeavour, since the process is a thought experiment that seeks to determine the likely outcome(s) of an affair based on starting conditions and the effects of decisions taken.

Static testing

Static testing is a form of software testing in which the software isn't actually executed. This is in contrast to dynamic testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It consists primarily of syntax checking of the code and manual reading of the code or document to find errors. This type of testing can be carried out in isolation by the developer who wrote the code. Code reviews, inspections, and walkthroughs are also used.

From the black box testing point of view, static testing involves review of requirements or specifications. This is done with an eye toward completeness or appropriateness for the task at hand. This is the verification portion of Verification and Validation.


Even static testing can be automated. A static test suite consists of programs to be analyzed by an interpreter or a compiler that asserts each program's syntactic validity.
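A minimal sketch of that kind of automation, assuming Python sources and using the standard-library ast module as the compiler front end that asserts syntactic validity without ever executing the code:

```python
import ast
import sys

def check_syntax(paths):
    """Statically check each source file: parse it, but never run it."""
    failures = 0
    for path in paths:
        with open(path, encoding="utf-8") as source:
            text = source.read()
        try:
            ast.parse(text, filename=path)   # compile-time (syntax) check only
            print(f"OK      {path}")
        except SyntaxError as err:
            failures += 1
            print(f"FAILED  {path}:{err.lineno}: {err.msg}")
    return failures

if __name__ == "__main__":
    # Usage (hypothetical): python static_check.py module_a.py module_b.py
    sys.exit(1 if check_syntax(sys.argv[1:]) else 0)
```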

Bugs discovered at this stage of development are less expensive to fix than later in the development cycle.

Saturday, November 8, 2008

A Waterfall Test Process

A Waterfall Test Process can be represented as shown in the figure below:

The description of each state is given in the table below:



The information above is from Rapid Testing, written by Chris Brown, Gary Cobb, and Robert Culbertson.



Verification and Validation (V&V)

There are two basic functions of software testing: one is verification and the other is validation. Schulmeyer and Mackenzie (2000) define verification and validation (V&V) as follows:

Verification is the assurance that the products of a particular phase in the development process are consistent with the requirements of that phase and the preceding phase.

Validation is the assurance that the final product satisfies the system requirements.

The purpose of validation is to ensure that the system has implemented all requirements, so that each function can be traced back to a particular customer requirement. In other words, validation makes sure that the right product is being built.

Verification is focused more on the activities of a particular phase of the development process. For example, one of the purposes of system testing is to give assurance that the system design is consistent with the requirements that were used as an input to the system design phase. Unit and integration testing can be used to verify that the program design is consistent with the system design. In simple terms, verification makes sure that the product is being built right. We'll see examples of both verification and validation activities as we examine each phase of the development process in later chapters.
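As a loose illustration of the distinction (not taken from the book), the sketch below uses a made-up function and requirement ID: the first test verifies that the unit matches an assumed design-phase rule about rounding, while the second validates behaviour that traces back to a customer requirement.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: discount a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

class VerificationExample(unittest.TestCase):
    """Verification: is the unit consistent with its (assumed) design spec,
    which says results are rounded to two decimal places?"""
    def test_result_is_rounded_to_two_decimals(self):
        self.assertAlmostEqual(apply_discount(9.99, 33), 6.69, places=2)

class ValidationExample(unittest.TestCase):
    """Validation: traced back to a hypothetical customer requirement,
    REQ-12: 'a 15% member discount is applied to the order total'."""
    def test_req_12_member_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 15), 170.0, places=2)

if __name__ == "__main__":
    unittest.main()
```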

The information above is from Rapid Testing, written by Chris Brown, Gary Cobb, and Robert Culbertson.



Saturday, October 25, 2008

What is Unit Testing?

In computer programming, unit testing is a procedure used to validate that individual units of source code are working properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming the smallest unit is a method, which may belong to a base/super class, abstract class, or derived/child class.

Ideally, each test case is independent of the others; mock objects and test harnesses can be used to assist in testing a module in isolation. Unit testing is typically done by developers and not by software testers or end-users.

The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. Each unit is tested separately before integrating them into modules to test the interfaces between modules. Unit testing has proven its value in that a large percentage of defects are identified during its use.

The most common approach to unit testing requires drivers and stubs to be written. The driver simulates a calling unit and the stub simulates a called unit. The investment of developer time in this activity sometimes results in demoting unit testing to a lower priority, and that is almost always a mistake. Even though the drivers and stubs cost time and money, unit testing provides some undeniable advantages: it allows the testing process to be automated, it reduces the difficulty of discovering errors hidden in more complex pieces of the application, and test coverage is often enhanced because attention is given to each unit.
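A minimal sketch of the driver-and-stub idea, assuming Python's built-in unittest module and made-up names: the test class plays the role of the driver for a hypothetical report_total unit, and fetch_price_stub stands in for the called unit (normally a database or service lookup) that is not under test here.

```python
import unittest

def report_total(order_ids, fetch_price):
    """Hypothetical unit under test: sum the prices of the given orders.
    fetch_price is the called unit, normally a database or service lookup."""
    return sum(fetch_price(order_id) for order_id in order_ids)

def fetch_price_stub(order_id):
    """Stub: simulates the called unit by returning canned data."""
    canned = {"A-1": 10.0, "A-2": 2.5}
    return canned[order_id]

class ReportTotalDriver(unittest.TestCase):
    """Driver: simulates the calling unit by invoking report_total directly."""

    def test_total_of_known_orders(self):
        self.assertEqual(report_total(["A-1", "A-2"], fetch_price_stub), 12.5)

    def test_total_of_no_orders_is_zero(self):
        self.assertEqual(report_total([], fetch_price_stub), 0)

if __name__ == "__main__":
    unittest.main()
```

Because the stub replaces the real price lookup, a failure in these tests points at report_total itself rather than at the units it calls, which is exactly the isolation discussed below.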

For example, if you have two units and decide it would be more cost-effective to glue them together and initially test them as an integrated unit, an error could occur in a variety of places:
  • Is the error due to a defect in unit 1?
  • Is the error due to a defect in unit 2?
  • Is the error due to defects in both units?
  • Is the error due to a defect in the interface between the units?
  • Is the error due to a defect in the test?
Finding the error (or errors) in the integrated module is much more complicated than first isolating the units, testing each, then integrating them and testing the whole.

Drivers and stubs can be reused so the constant changes that occur during the development cycle can be retested frequently without writing large amounts of additional test code. In effect, this reduces the cost of writing the drivers and stubs on a per-use basis and the cost of retesting is better controlled.

Wednesday, September 24, 2008

Software Testing Type

· black box testing - You don't need to know the internal design or have deep knowledge of the code to conduct this test. It is based mainly on functionality, specifications, and requirements.

· white box testing - This test is based on knowledge of the internal design and code. Tests are based on code statements, coding styles, etc.

· unit testing - the most 'micro' scale of testing; tests particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; it may require developing test driver modules or test harnesses.

· incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

· integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

· functional testing - black-box type testing geared to the functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

· system testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

· end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

· sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

· regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

· acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

· load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

· stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

· performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

· usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

· install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

· recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

· failover testing - typically used interchangeably with 'recovery testing'.

· security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.

· compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

· exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

· ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

· context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.

· user acceptance testing - determining if software is satisfactory to an end-user or customer.

· comparison testing - comparing software weaknesses and strengths to competing products.

· alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

· beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

· mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources. (A minimal sketch appears after this list.)

Credit to: http://www.asknumbers.com/QualityAssuranceandTesting.aspx
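To make the mutation testing entry above concrete, here is a minimal hand-rolled sketch (real mutation tools automate this on a large scale, and all names here are made up): a deliberate 'bug' is introduced into a copy of the unit, and the original test cases are re-run to see whether they detect it.

```python
def is_adult(age):
    """Original unit: people aged 18 or over are adults."""
    return age >= 18

def is_adult_mutant(age):
    """Mutant: the >= operator has been deliberately changed to >."""
    return age > 18

def test_suite(candidate):
    """The existing test cases, run against whichever version is supplied."""
    return candidate(18) is True and candidate(17) is False

# The suite should pass on the original and fail on (i.e. 'kill') the mutant;
# if it also passed on the mutant, the test data would be judged too weak.
print("original passes:", test_suite(is_adult))              # expected: True
print("mutant killed:  ", not test_suite(is_adult_mutant))   # expected: True
```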

Tuesday, September 16, 2008

Introduction to Software Testing

What is Software Testing?

Software testing is an activity, part of the software development process, aimed at evaluating the features (functionality, performance, etc.) of a software item (system, subsystem, unit, etc.) against a given set of system requirements. Software testing implies running the software item under predetermined conditions (test cases, test scenarios), recording and analyzing the results obtained, and identifying errors (i.e., bugs), which indicate a failure to satisfy some requirement of the software.

Software Testing Fundamentals

During testing, the software engineer produces a series of test cases that are used to “rip apart” the software they have produced. Testing is the one step in the software process that can be seen by the developer as destructive rather than constructive. Software engineers are typically constructive people, and testing requires them to overcome preconceived notions of correctness and deal with conflicts when errors are identified.

Testing objectives include:

1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.