Verification is the confirmation that the product meets its identified specifications; it checks for uncovered requirements.
Validation is the confirmation that the product appropriately meets its design functions (user requirements); it checks or executes functionality.
Differences between verification and validation

| Verification | Validation |
| --- | --- |
| Confirmation that the product meets its identified specifications; checks for uncovered requirements. Ensures that you are building the product according to its requirements, specifications and standards. | Confirmation that the product appropriately meets its design functions (user requirements); checks or executes functionality. |
| Are we building the product right? (Efficiency) Does the system satisfy its specification? | Are we building the right product? (Effectiveness) Does the specification satisfy the customer's expectations? |
| Evaluates documents, plans, code, requirements and specifications. | Evaluates the product itself. |
| Inputs are checklists, issue lists, walkthroughs, inspection meetings and reviews. | Inputs are test data and the test cases themselves. |
| Output is a nearly perfect set of documents, plans, specifications and requirement docs. | Output is a nearly perfect product. |
| Methods: inspection, walkthrough and buddy checks. | |
Some software testing definitions:
End-to-end testing
Similar to system testing; it involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database or using network communications. It generally ensures that all aspects of the business are supported by the systems under test: that system components integrate correctly and that the right information is passed between them. Otherwise, dependencies may go unidentified and incorrect information could be passed between system components and systems.
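The idea can be sketched in a few lines. Below, a hypothetical "order service" writes to an in-memory stand-in for a shared database and a separate "report service" reads the same data back, so the test exercises the flow of information between components rather than either component in isolation (all names here are illustrative, not from the original text):

```python
class InMemoryDB:
    """Stand-in for a real database shared by the components."""
    def __init__(self):
        self.rows = []

    def insert(self, row):
        self.rows.append(row)

    def query(self, **filters):
        return [r for r in self.rows
                if all(r.get(k) == v for k, v in filters.items())]

def place_order(db, customer, amount):
    """Component 1: record an order in the shared store."""
    db.insert({"customer": customer, "amount": amount, "status": "placed"})

def total_for_customer(db, customer):
    """Component 2: a report built from the same data."""
    return sum(r["amount"] for r in db.query(customer=customer))

def test_order_appears_in_report():
    """End-to-end: data written by one component reaches the other intact."""
    db = InMemoryDB()
    place_order(db, "alice", 40)
    place_order(db, "alice", 2)
    place_order(db, "bob", 99)
    assert total_for_customer(db, "alice") == 42

test_order_appears_in_report()
```

A real end-to-end test would use the actual database and network stack; the structure of the check, however, is the same.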
Sanity testing or Smoke testing
Normally the initial testing effort, used to determine whether a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging systems down to a crawl, or corrupting databases, it may not be in a sound enough condition to warrant further testing in its current state.
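A smoke test is typically a handful of cheap checks run against a new build before the full test effort begins. This minimal sketch (with hypothetical app functions) returns a list of failed checks; an empty list means the build is accepted for further testing:

```python
def create_app():
    # Stand-in for starting the application under test.
    return {"status": "running", "db_connected": True}

def smoke_test(app):
    """Return a list of failed checks; empty means the build is accepted."""
    failures = []
    if app.get("status") != "running":
        failures.append("app did not start")
    if not app.get("db_connected"):
        failures.append("database connection failed")
    return failures

app = create_app()
print("accept build" if not smoke_test(app) else "reject build")
```

The point is speed and breadth, not depth: each check should fail fast if the build is fundamentally broken.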
Regression testing
Re-testing after fixes of the software bug or changes to its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
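In practice, once a bug is fixed, a test that reproduces the original failure is kept in the automated suite so the fix is re-verified on every run. A minimal sketch with a hypothetical function whose earlier builds (we assume) crashed on whitespace:

```python
def parse_price(text):
    """Fixed version: earlier builds crashed on whitespace like ' 9.99 '."""
    return float(text.strip())

def test_parse_price_handles_whitespace():
    # Regression test pinning the whitespace bug; fails again if it reappears.
    assert parse_price(" 9.99 ") == 9.99

def test_parse_price_plain():
    assert parse_price("9.99") == 9.99

test_parse_price_handles_whitespace()
test_parse_price_plain()
```

Because such tests accumulate over time, an automated runner makes re-running the whole suite after every change practical.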
Acceptance testing
Final testing based on the specifications of the end user or customer, or based on use by customers over some limited period of time.
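Such testing can be driven directly by the user's specification. In this sketch, each scenario mirrors one line of a hypothetical end-user spec for a discount rule (10% off orders of 100 or more), and the release is accepted only if every scenario passes:

```python
def discount(order_total):
    """System under test: 10% off orders of 100 or more, per the (assumed) spec."""
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

# Each tuple mirrors one line of the user's specification: (input, expected).
SPEC_SCENARIOS = [
    (50, 50),        # small orders pay full price
    (100, 90.0),     # threshold order gets the discount
    (250, 225.0),    # larger orders too
]

results = [discount(total) == expected for total, expected in SPEC_SCENARIOS]
print("release accepted" if all(results) else "release rejected")
```

Keeping the scenarios as plain data makes it easy for the customer to review them against their own expectations.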