Monday, August 13, 2007

Manual Testing

What makes a good test engineer?


A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is valuable. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.


A good QA, test, or QA/Test (combined) manager should:

• be familiar with the software development process

• be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems)

• be able to promote teamwork to increase productivity

• be able to promote cooperation between software, test, and QA engineers

• have the diplomatic skills needed to promote improvements in QA processes

• have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to

• have people judgement skills for hiring and keeping skilled personnel

• be able to communicate with technical and non-technical people, engineers, managers, and customers.

• be able to run meetings and keep them focused


What steps are needed to develop and run software tests?


The following are some of the steps to consider:


• Obtain requirements, functional design, and internal design specifications and other necessary documents

• Obtain budget and schedule requirements

• Determine project-related personnel and their responsibilities, reporting requirements, and required standards and processes (such as release processes, change processes, etc.)

• Identify the application's higher-risk aspects, set priorities, and determine the scope and limitations of tests

• Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.

• Determine test environment requirements (hardware, software, communications, etc.)

• Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)

• Determine test input data requirements

• Identify tasks, those responsible for tasks, and labor requirements

• Set schedule estimates, timelines, milestones

• Determine input equivalence classes, boundary value analyses, and error classes (a sketch follows this list)

• Prepare test plan document and have needed reviews/approvals

• Write test cases

• Have needed reviews/inspections/approvals of test cases

• Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data

• Obtain and install software releases

• Perform tests

• Evaluate and report results

• Track problems/bugs and fixes

• Retest as needed

• Maintain and update test plans, test cases, test environment, and testware through the life cycle
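
As an example of the equivalence class and boundary value step above, consider a hypothetical input field that accepts ages 18 through 65. The following Python sketch is illustrative only; the range and representative values are invented:

# Hypothetical requirement: an 'age' field accepts values 18..65 inclusive.
LOW, HIGH = 18, 65

# Equivalence classes: one representative value per class is usually enough.
equivalence_classes = {
    "below range (invalid)": 10,
    "within range (valid)": 40,
    "above range (invalid)": 80,
}

# Boundary value analysis: test at and just beyond each boundary.
boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

def age_is_valid(age):
    return LOW <= age <= HIGH

for label, value in equivalence_classes.items():
    print(label, value, age_is_valid(value))
for value in boundary_values:
    print("boundary", value, age_is_valid(value))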


What's a 'test case'?


• A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

• Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
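
To make the particulars concrete, a test case can be captured as a structured record. This is a minimal Python sketch; the field names mirror the items above, and the login example values are hypothetical:

from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Fields mirror the particulars listed above.
    identifier: str
    name: str
    objective: str
    setup: str            # test conditions/setup
    input_data: dict      # input data requirements
    steps: list = field(default_factory=list)
    expected_result: str = ""

# Hypothetical example: verifying a login feature.
tc_login = TestCase(
    identifier="TC-042",
    name="Valid login",
    objective="Verify that a registered user can log in",
    setup="User 'alice' exists with password 'secret'",
    input_data={"username": "alice", "password": "secret"},
    steps=["Open the login screen",
           "Enter the username and password",
           "Click 'Log in'"],
    expected_result="The user's home page is displayed",
)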


What is 'configuration management'?


Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes. (See the 'Tools' section for web resources with listings of configuration management tools. Also see the Bookstore section's 'Configuration Management' category for useful books with more information.)


What should be done after a bug is found?


The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:

• Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary


• Bug identifier (number, ID, etc.)

• Current bug status (e.g., 'Released for Retest', 'New', etc.)

• The application name or identifier and version

• The function, module, feature, object, screen, etc. where the bug occurred

• Environment specifics, system, platform, relevant hardware specifics

• Test case name/number/identifier

• One-line bug description

• Full bug description

• Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool

• Names and/or descriptions of file/data/messages/etc. used in test

• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem

• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

• Was the bug reproducible?

• Tester name

• Test date

• Bug reporting date

• Name of developer/group/organization the problem is assigned to

• Description of problem cause

• Description of fix

• Code section/file/module/class/method that was fixed

• Date of fix

• Application version that contains the fix

• Tester responsible for retest

• Retest date

• Retest results

• Regression testing requirements

• Tester responsible for regression tests

• Regression testing results

A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
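
A bug report along these lines can be represented as a structured record. The following Python sketch is illustrative only; the field names follow the tracking items above, and the example values are hypothetical:

from dataclasses import dataclass
from typing import Optional

@dataclass
class BugReport:
    # Fields follow the tracking items listed above.
    bug_id: str
    status: str                 # e.g., 'New', 'Released for Retest'
    application: str
    version: str
    module: str                 # function/module/feature where the bug occurred
    environment: str
    summary: str                # one-line description
    description: str            # full description and steps to reproduce
    severity: int               # 1 (critical) .. 5 (low)
    reproducible: bool
    reported_by: str
    reported_on: str
    assigned_to: Optional[str] = None
    fix_description: Optional[str] = None
    fixed_in_version: Optional[str] = None
    retest_result: Optional[str] = None

# Hypothetical example entry.
bug = BugReport(
    bug_id="BUG-1024", status="New",
    application="OrderEntry", version="2.3.1",
    module="Checkout screen", environment="Windows XP, IE 6",
    summary="Total not updated after removing an item",
    description="1) Add two items 2) Remove one 3) Total still shows both",
    severity=2, reproducible=True,
    reported_by="tester1", reported_on="2007-08-13",
)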


How can it be known when to stop testing?


This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

• Deadlines (release deadlines, testing deadlines, etc.)

• Test cases completed with certain percentage passed

• Test budget depleted

• Coverage of code/functionality/requirements reaches a specified point

• Bug rate falls below a certain level

• Beta or alpha testing period ends
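
Several of these factors can be combined into a simple, explicit 'stop' check. The sketch below is a hypothetical illustration; the thresholds are assumptions, not recommendations:

def ready_to_stop(pass_rate, coverage, bugs_per_week, budget_left):
    # All thresholds here are illustrative assumptions.
    criteria = [
        pass_rate >= 0.95,      # test cases completed with a certain % passed
        coverage >= 0.80,       # code/functionality coverage reaches a point
        bugs_per_week <= 2,     # bug rate falls below a certain level
    ]
    # A depleted budget or a deadline can force a stop regardless.
    return all(criteria) or budget_left <= 0

print(ready_to_stop(pass_rate=0.97, coverage=0.85, bugs_per_week=1, budget_left=10))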


What can be done if requirements are changing continuously?


A common problem and a major headache.

• Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.

• It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.

• If the code is well-commented and well-documented, this makes changes easier for the developers.

• Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.

• The project's initial schedule should allow for some extra time commensurate with the possibility of changes.

• Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.

• Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.

• Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.

• Balance the effort put into setting up automated testing against the expected effort required to redo the automated tests to deal with changes.

• Try to design some flexibility into automated test scripts (see the data-driven sketch after this list).

• Focus initial automated testing on application aspects that are most likely to remain unchanged.

• Devote appropriate effort to risk analysis of changes to minimize regression testing needs.

• Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans)

• Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).
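
One common way to build flexibility into automated test scripts is to separate test data from test logic, so that a requirements change means editing a data table rather than code. A minimal, hypothetical Python sketch:

# Test data kept separate from test logic; when requirements change,
# only this table needs updating.
LOGIN_CASES = [
    # (username, password, expected_outcome)
    ("alice", "secret", "success"),
    ("alice", "wrong", "error"),
    ("", "secret", "error"),
]

def check_login(username, password):
    # Stand-in for the real application call (hypothetical).
    return "success" if (username, password) == ("alice", "secret") else "error"

def run_login_tests():
    for username, password, expected in LOGIN_CASES:
        actual = check_login(username, password)
        result = "PASS" if actual == expected else "FAIL"
        print(result, username, password, actual)

run_login_tests()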



What if the application has functionality that wasn't in the requirements?


It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.


What if an organization is growing so fast that fixed QA processes are impossible?


This is a common problem in the software industry, especially in new technology areas.

There is no easy solution in this situation, other than:

• Hire good people

• Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer

• Everyone in the organization should be clear on what 'quality' means to the customer


How does a client/server environment affect testing?


Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing. (See the 'Tools' section for web resources with listings that include these kinds of test tools.)
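
As a small illustration of load testing, the sketch below issues concurrent requests against a server and reports response times. It is a bare-bones, assumption-laden example (the URL, request count, and concurrency level are hypothetical), not a substitute for a commercial load-testing tool:

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/"  # hypothetical server under test
REQUESTS = 50
CONCURRENCY = 10

def timed_request(_):
    start = time.time()
    with urlopen(URL) as resp:
        resp.read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    times = list(pool.map(timed_request, range(REQUESTS)))

print("requests:", len(times))
print("average response: %.3fs" % (sum(times) / len(times)))
print("worst response: %.3fs" % max(times))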


How can World Wide Web sites be tested?


Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort.


Other considerations might include:

• What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time and database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?

• Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?

• What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?

• Will down time for server and content maintenance/upgrades be allowed? How much?

• What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested?

• How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?

• What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?

• Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?

• Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?

• How will internal and external links be validated and updated? How often? (A link-checking sketch appears at the end of this section.)

• Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?

• How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?

• How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?

Some sources of site security information include the Usenet newsgroup 'comp.security.announce' and links concerning web site security in the 'Other Resources' section.

Some usability guidelines to consider - these are subjective and may or may not apply to a given situation


(Note: more information on usability testing issues can be found in articles about web site usability in the 'Other Resources' section):

• Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.

• The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.

• Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.

• All pages should have links external to the page; there should be no dead-end pages.

• The page owner, revision date, and a link to a contact person or organization should be included on each page.

Many new web site test tools have appeared in recent years, and more than 280 of them are listed in the 'Web Test Tools' section.
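
As one concrete example, internal and external link validation (mentioned above) can be automated with a simple crawler. The sketch below is a minimal, assumption-laden illustration (the start URL is hypothetical); a real checker would also need to handle robots.txt, redirects, rate limits, and authentication:

import re
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen

START_URL = "http://localhost:8080/"  # hypothetical site under test

def find_links(html, base):
    # Crude href extraction; a real checker would use an HTML parser.
    return [urljoin(base, href) for href in re.findall(r'href="([^"]+)"', html)]

def check_links(start_url):
    page = urlopen(start_url).read().decode("utf-8", errors="replace")
    for link in find_links(page, start_url):
        try:
            print(urlopen(link).getcode(), link)
        except (HTTPError, URLError) as err:
            print("DEAD", link, err)

check_links(START_URL)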


How is testing affected by object-oriented designs?


Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well-designed, this can simplify test design.
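
For instance, white-box tests oriented to an application's objects typically exercise individual classes directly, including their internal error paths. A minimal hypothetical Python sketch (the Account class is invented for illustration):

class Account:
    # Hypothetical application class under test.
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# White-box tests aimed directly at the object's behavior.
def test_withdraw_reduces_balance():
    acct = Account(balance=100)
    acct.withdraw(30)
    assert acct.balance == 70

def test_overdraw_is_rejected():
    acct = Account(balance=10)
    try:
        acct.withdraw(50)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_withdraw_reduces_balance()
test_overdraw_is_rejected()
print("all tests passed")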


What is 'Software Testing'?


Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'. (See the Bookstore section's 'Software Testing' category for a list of useful books on Software Testing.)

• Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.
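
The quoted condition maps directly onto the usual arrange/act/assert shape of an automated check. A schematic Python sketch, with all names invented as placeholders:

class FakeApp:
    # Stand-in for the application under test (hypothetical).
    def __init__(self, interface, hardware):
        self.interface, self.hardware = interface, hardware
        self.last_event = None

    def do_action(self, action):
        # In this stand-in, doing C in interface A causes D.
        if self.interface == "A" and action == "C":
            self.last_event = "D"

def test_d_happens_after_c():
    app = FakeApp(interface="A", hardware="B")  # arrange: controlled conditions
    app.do_action("C")                          # act: the user does C
    assert app.last_event == "D"                # assert: D should happen

test_d_happens_after_c()
print("test passed")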


What is SEI? CMM? ISO? IEEE? ANSI? Will it help?


SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.


CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of organizational 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors.


Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.


Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.


Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.


Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.


Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.


Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance.

• ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.ch/ for the latest information. In the U.S. the standards can be purchased via the ASQ web site at http://e-standards.asq.org/



• IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.

• ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

• Other software development process assessment methods besides CMM and ISO 9000 include SPICE, Trillium, TickIT, and Bootstrap.

