Test Manager Roles and Responsibilities
Improve accountability with David Bowman's information management guidelines for test manager roles and responsibilities.
This site is designed for Information Technology professionals who need to improve accountability and require guidance and direction to establish information management roles. It provides information management guidelines for test manager roles and responsibilities.
Quality Assurance Test Manager
- Should ensure the quality of data warehouse functionality prior to production cut-over;
- Should work with, or be assigned to, the project team to develop testing scenarios to ensure expected performance, reliability, and functionality;
- Should assist production support in creating scenarios to diagnose problems and assist in testing resolutions;
- Should maintain the test script library for regression testing; and
- Should be accountable to the program manager for the quality of all test deliverables.
Planning and Analysis Responsibilities include the following:
Project Test Plan
- Should be created as part of the project planning phase;
- Should refer to the requirements acceptance criteria that were developed as part of the requirements definition phase, and to the test environments that will be used;
- Should list the overall approach for creating test data; and
- Should identify all training requirements.
Project Overview
- Should summarize project objectives, test objectives, and scope;
- Should list the scope of testing included in this plan;
- Should list key test deliverables, e.g. unit test results and system test results;
- Should identify any test assumptions; and
- Should identify any test constraints.
Resource Plan
- Should document roles and responsibilities for everyone involved with testing, including the test team, the technical team, the user team, and any production support staff who may be required to help run test processes;
- Should list the physical environments needed to perform testing, including hardware, software, and database needs, and should identify all requirements for each environment;
- Should identify the test tools the project team will use to perform the tests, including the version of each tool, covering test management, defect management, and configuration management;
- Should address any action required to eliminate the testing skills gap through training; and
- Should describe the organizational context, both technical and managerial, within which the planned project is to be implemented, e.g. an organizational chart may be inserted.
Test Plan
- Should be created for each component of the test plan, e.g. unit test, system test, integration test, regression test, quality assurance test, and user acceptance test;
- Should document the scope of each test;
- Should list the entrance criteria that should be met before the test commences;
- Should specify who will create test cases, how they will be created, and where they will be stored;
- Should describe what test data is required and who will create it;
- Should identify who will create test scripts;
- Should describe how testing will be completed;
- Should describe where test results will be stored, usually in the test tool; and
- Should list the exit criteria that should be met before the test is considered complete.
Test Data Strategy
Should list the overall approach for creating test data. Avoid using large volumes of data for unit, system, integration, regression, and quality assurance testing; these tests can be completed adequately with a sub-set of data, and using full volumes adds significantly to the test period without adding benefit. The extra time it takes to create a complete set of test data early in the project will be paid back many times by reducing the time to back-up/restore data during testing.
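As an illustration only, the sketch below shows one way to extract a referentially consistent sub-set: sample the parent rows first, then copy only the child rows that reference them. The customers/orders schema is hypothetical, and src and dest are assumed to be open sqlite3 connections to a production extract and an empty test database.

    import sqlite3

    def build_test_subset(src, dest, sample_size=1000):
        """Copy a small, referentially consistent sample into a test database."""
        dest.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
        dest.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
                     "customer_id INTEGER, total REAL)")
        # Sample parent rows first, then only the child rows that reference
        # them, so foreign keys in the sub-set stay valid.
        customers = src.execute(
            "SELECT id, name FROM customers ORDER BY id LIMIT ?",
            (sample_size,)).fetchall()
        dest.executemany("INSERT INTO customers VALUES (?, ?)", customers)
        ids = [c[0] for c in customers]
        if ids:
            marks = ",".join("?" * len(ids))
            orders = src.execute(
                f"SELECT id, customer_id, total FROM orders "
                f"WHERE customer_id IN ({marks})", ids).fetchall()
            dest.executemany("INSERT INTO orders VALUES (?, ?, ?)", orders)
        dest.commit()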
Testing Communications Plan
- Should describe the test status reporting process;
- Should define how defects will be reported and should identify the tool that will be used, usually the testing tool; and
- Should describe the defect severity levels (a sketch follows this list).
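The guidelines call for defined severity levels but do not prescribe a scale, so the four-level scheme below is an assumption, sketched only to show how levels can drive a simple release-blocking rule.

    from enum import IntEnum

    class Severity(IntEnum):
        CRITICAL = 1  # production cut-over blocked; no workaround exists
        HIGH = 2      # major function fails; workaround is costly
        MEDIUM = 3    # function impaired; acceptable workaround exists
        LOW = 4       # cosmetic; no impact on functionality

    def is_release_blocking(severity: Severity) -> bool:
        """One possible exit-criteria rule: no open critical or high defects."""
        return severity <= Severity.HIGH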
Test Schedule
- Should identify test milestones; and
- Should provide a test schedule that clearly defines when all testing is expected to occur.
Requirements Acceptance Strategy
- Requirements should be rejected if they fail the "SMART" test, i.e. each requirement must be specific (objectives should state exactly what will be achieved, without ambiguity), measurable (objectives must be quantifiable so that you will know whether the requirement has been met), achievable (there must be a clear link between the objective and the business case or business plan), realistic (each requirement should be attainable), and time-targeted (objectives should specify a completion date and milestones), as in the sketch following this list;
- Should define requirements acceptance criteria as part of requirements analysis;
- Should approve requirements acceptance criteria as part of the requirements approval process; and
- Should base-line requirements acceptance criteria as part of the requirements base-lining process.
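As a sketch only, a SMART review can be recorded as a simple checklist. The fields, the requirement identifier, and the pass/fail rule below are assumptions; the source defines the criteria but not a data structure.

    from dataclasses import dataclass

    @dataclass
    class SmartReview:
        requirement_id: str
        specific: bool = False       # states exactly what will be achieved
        measurable: bool = False     # quantifiable acceptance criteria exist
        achievable: bool = False     # clearly linked to the business case
        realistic: bool = False      # attainable with planned resources
        time_targeted: bool = False  # completion date and milestones given

        def accepted(self) -> bool:
            """Reject the requirement if any SMART criterion fails."""
            return all((self.specific, self.measurable, self.achievable,
                        self.realistic, self.time_targeted))

    review = SmartReview("REQ-042", specific=True, measurable=True,
                         achievable=True, realistic=True)
    print(review.accepted())  # False: no completion date, so reject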
Test Data Acquisition Strategy
- Should be completed at the end of architecture and design;
- Should not use large volumes of data for unit, system, integration, regression, and quality assurance testing;
- Should treat test data as a configuration item and should base-line it (a sketch follows this list);
- Should only allow changes to base-lined test data upon approval of a change request; and
- Should plan sufficient on-line storage for test data so that back-up/restore can occur without waiting for access to time-consuming tape archive backup.
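One way to base-line test data as a configuration item is to record a checksum manifest, so any change to the data set is detectable and can be tied to an approved change request. This is a sketch under an assumed file layout and manifest name, not prescribed by the source.

    import hashlib
    import json
    from pathlib import Path

    def baseline(data_dir, manifest="baseline.json"):
        """Record a SHA-256 checksum for every test data file."""
        digests = {
            str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(Path(data_dir).rglob("*")) if p.is_file()
        }
        Path(manifest).write_text(json.dumps(digests, indent=2))

    def changed_files(manifest="baseline.json"):
        """Return base-lined files whose contents no longer match."""
        recorded = json.loads(Path(manifest).read_text())
        return [
            path for path, digest in recorded.items()
            if hashlib.sha256(Path(path).read_bytes()).hexdigest() != digest
        ]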
Requirement Traceability Matrix
- Should specify a requirements traceability matrix to outline all testing required for each requirement (a minimal sketch follows this list);
- Should document how each component of the architecture and design documents satisfies requirements; and
- Should be reviewed prior to commencing coding to ensure that all requirements have been satisfied and that no additional "nice to have" requirements have been added.
Test Case Strategy
- Should create and approve quality assurance test cases after sign-off and approval of architecture and design documents;
- Should ensure that developers create unit test data, which should be low volume; and
- Should ensure that unit testing occurs in a developer's personal schema (a sketch follows this list).
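As a rough sketch, a unit test might derive the personal schema name from the developer's login and run against a small data set. The dev_<username> naming convention is an assumption, and since SQLite has no schemas, a table-name prefix stands in for one here.

    import getpass
    import sqlite3

    def test_load_customers():
        schema = f"dev_{getpass.getuser()}"   # assumed personal schema convention
        conn = sqlite3.connect(":memory:")
        # A name prefix stands in for a real database schema in this demo.
        conn.execute(f"CREATE TABLE {schema}_customers (id INTEGER, name TEXT)")
        rows = [(1, "a"), (2, "b")]           # deliberately low-volume data
        conn.executemany(f"INSERT INTO {schema}_customers VALUES (?, ?)", rows)
        count = conn.execute(
            f"SELECT COUNT(*) FROM {schema}_customers").fetchone()[0]
        assert count == len(rows)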
Code Review Strategy
- Should ensure that all code is the subject of a peer review;
- Should identify the specific requirements satisfied by each module in the design specification;
- Should focus on reviewing code and unit test results to provide additional verification that the code conforms to data movement best practices and security requirements; and
- Should verify that test results confirm that all conditional logic paths were followed and that all error messages were tested properly (a sketch follows this list).
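The sketch below shows the kind of evidence a reviewer might look for: tests exercising both branches of a conditional and checking the error message text. The validate_batch function is a hypothetical example.

    import pytest

    def validate_batch(row_count, expected):
        if row_count == expected:
            return "batch accepted"
        raise ValueError(f"row count mismatch: got {row_count}, expected {expected}")

    def test_accept_path():
        assert validate_batch(100, 100) == "batch accepted"

    def test_reject_path_and_message():
        # Covers the error branch and verifies the message text itself.
        with pytest.raises(ValueError, match="row count mismatch"):
            validate_batch(99, 100)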
System Testing Strategies
- Should focus on testing a set of components to ensure that it performs to specification; and
- Should be performed by a senior developer or lead developer in a system test or integration test environment.
Integration Testing Strategy
- Should focus on testing a software release, i.e. a set of components intended to move to production as a planned release;
- Should be managed by the data architect or software designer in an integration test environment which "mirrors" the intended production environment;
- Should confirm, through system and volume testing, that the release operates correctly and can handle the required data volumes;
- Should conduct performance testing to ensure that data can be loaded in the available load window (a sketch follows this list); and
- Should conduct infrastructure and operations component testing to verify that all components function correctly, including system back-up, recovery, and security audit trails and tracking.
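A performance test for the load window can be as simple as timing the load and asserting that it fits; the four-hour window and the stand-in load function below are assumptions.

    import time

    LOAD_WINDOW_SECONDS = 4 * 60 * 60  # assumed nightly load window

    def run_load():
        """Stand-in for the actual data warehouse load process."""
        time.sleep(0.1)

    def test_load_fits_window():
        start = time.monotonic()
        run_load()
        elapsed = time.monotonic() - start
        assert elapsed < LOAD_WINDOW_SECONDS, f"load took {elapsed:.0f}s"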
Security Testing Strategy
- Should be performed in the production environment under the direction of the security officer;
- Should certify compliance with system security and integrity; and
- Should include a code review to address potential coding vulnerabilities such as: cross-site scripting (XSS), injection flaws (particularly SQL injection), malicious file execution, insecure direct object references, cross-site request forgery (CSRF), information leakage and improper error handling, broken authentication and session management, insecure cryptographic storage, insecure communications, and failure to restrict URL access (a sketch follows this list).
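For SQL injection in particular, the sketch below shows the pattern such a review should flag, alongside the parameterized fix; the users table is hypothetical.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    def find_user_unsafe(name):
        # VULNERABLE: input concatenated into SQL, so "x' OR '1'='1"
        # rewrites the query and returns every row.
        return conn.execute(
            f"SELECT name, role FROM users WHERE name = '{name}'").fetchall()

    def find_user_safe(name):
        # FIXED: a bound parameter keeps the input as data, never as SQL.
        return conn.execute(
            "SELECT name, role FROM users WHERE name = ?", (name,)).fetchall()

    print(find_user_unsafe("x' OR '1'='1"))  # [('alice', 'admin')]: injected
    print(find_user_safe("x' OR '1'='1"))    # []: input treated as data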
Quality Assurance Testing
- Should be performed by an independent test team in a quality assurance environment that mirrors the production environment;
- Should test every requirement and all system, user, and production support documentation;
- Should include reconciliation tests to manually confirm the validity of data (a sketch follows this list);
- Should include regression testing to ensure that the new software does not cause problems with existing software; and
- Should use a sub-set of production data to reduce time lost to lengthy load processes.
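A reconciliation test typically compares control totals (row counts and amount sums) between the source and the warehouse; the tables and figures below are demo assumptions.

    import sqlite3

    def control_totals(conn, table, amount_col):
        """Row count and amount total, the usual reconciliation figures."""
        row = conn.execute(
            f"SELECT COUNT(*), COALESCE(SUM({amount_col}), 0) FROM {table}"
        ).fetchone()
        return {"rows": row[0], "total": round(row[1], 2)}

    # Demo data standing in for the source extract and the warehouse load.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (amount REAL)")
    conn.execute("CREATE TABLE fact_orders (order_amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?)", [(10.0,), (20.5,)])
    conn.executemany("INSERT INTO fact_orders VALUES (?)", [(10.0,), (20.5,)])

    src = control_totals(conn, "orders", "amount")
    dw = control_totals(conn, "fact_orders", "order_amount")
    assert src == dw, f"reconciliation failed: {src} != {dw}"
    print("reconciled:", src)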
User Acceptance Testing
- Should focus on testing a set of requirements to ensure that they meet user expectations; and
- Should be performed in the quality assurance test environment or in a separate user acceptance test environment.
Release Testing Strategy
- Should focus on testing all release procedures to ensure that all objects can migrate correctly to the production environment (a sketch follows this list); and
- Should be completed in a release testing environment that mirrors production.
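A release test can verify that every object in the release manifest actually arrived in the target environment; the manifest format and object names below are assumptions.

    def missing_objects(manifest, deployed):
        """Objects planned for the release that did not migrate."""
        return sorted(set(manifest) - set(deployed))

    manifest = ["etl.load_orders", "dim_customer", "fact_sales"]
    deployed = ["etl.load_orders", "dim_customer"]
    print(missing_objects(manifest, deployed))  # ['fact_sales']: test fails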
Summary
The Quality Assurance Test Manager should be accountable to the program manager for the quality of all test deliverables. This site provided information management guidelines for test manager roles and responsibilities.