Monday, July 2, 2007

Software Testing Interview Questions - 1(b)

How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be performed. Common factors in deciding when to stop are:
• Deadlines reached (release deadlines, testing deadlines, etc.)
• Test cases completed with a certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Defect rate falls below a certain level
• Beta or Alpha testing period ends
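Several of the criteria above can be checked mechanically. The sketch below is a hypothetical "exit criteria" check; the metric names and thresholds are illustrative assumptions, not standard values.

```python
# Sketch of an automated "stop testing?" check against hypothetical
# project metrics. Thresholds are illustrative only.

def can_stop_testing(pass_rate, coverage, defect_rate,
                     pass_target=0.95, coverage_target=0.80,
                     defect_ceiling=0.02):
    """Return True when all exit criteria are met.

    pass_rate   -- fraction of executed test cases that passed
    coverage    -- fraction of code/requirements covered by tests
    defect_rate -- new defects found per test case executed
    """
    return (pass_rate >= pass_target
            and coverage >= coverage_target
            and defect_rate <= defect_ceiling)

print(can_stop_testing(0.97, 0.85, 0.01))  # all criteria met -> True
print(can_stop_testing(0.97, 0.70, 0.01))  # coverage short   -> False
```

In practice these thresholds come from the test plan, and the decision is rarely fully automatic; the point is that the stop criteria should be explicit, not a gut feeling.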

What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Considerations include:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• What do the developers think are the highest-risk aspects of the application?
• Which tests will have the best high-risk-coverage to time-required ratio?
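One common way to turn these questions into a test order is the impact-times-likelihood heuristic. The features and scores below are hypothetical, purely to illustrate the prioritization.

```python
# Minimal risk-based prioritization sketch. Features and scores are
# hypothetical; "risk" is the common impact-times-likelihood heuristic.

features = [
    # (name, business impact 1-5, likelihood of failure 1-5)
    ("payment processing", 5, 3),
    ("report export",      2, 4),
    ("login",              5, 2),
    ("help screen",        1, 1),
]

# Higher risk score = test first.
prioritized = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, impact, likelihood in prioritized:
    print(f"{name}: risk={impact * likelihood}")
```

When time runs out, work the list from the top down: whatever goes untested is, by this estimate, the least risky.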
What if the software has so many bugs that it can't really be tested at all?

Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing, insufficient integration testing, poor design, or improper build or release procedures), managers should be notified and provided with documentation as evidence of the problem.

If you need more "Jobs" / "Placement Papers" Click & Subscribe now at http://finance.groups.yahoo.com/group/onestop_jobs/

How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities.
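The load-testing idea mentioned above can be illustrated with a toy sketch: many concurrent clients hammering a server and the elapsed time measured. The server function here is simulated; real load testing would use a dedicated tool against the actual system.

```python
# Toy load/stress sketch: run many concurrent clients against a
# simulated server call and measure throughput. Illustrative only --
# real load testing uses dedicated tools against the real system.
import concurrent.futures
import time

def fake_server(request_id):
    """Stand-in for a client/server call; sleeps to simulate work."""
    time.sleep(0.01)
    return f"ok:{request_id}"

def load_test(clients=20):
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=clients) as pool:
        results = list(pool.map(fake_server, range(clients)))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = load_test()
print(f"{len(results)} requests completed in {elapsed:.2f}s")
```

Raising the client count until response times degrade or errors appear gives a rough picture of the system's limits, which is exactly what load/stress testing is after.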
Does it matter how much the software has been tested already?
No. The tester makes an initial assessment of the software regardless of how much testing it has already received, and classifies it into one of three stability levels:
• Low stability (bugs are expected to be easy to find, indicating that the program has not been tested or has only been very lightly tested)
• Normal stability (normal level of bugs, indicating a normal amount of programmer testing)
• High stability (bugs are expected to be difficult to find, indicating already well tested)
How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black-box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.

Will automated testing tools make testing easier?
A tool set that allows controlled access to all test assets promotes better communication between team members and helps break down the walls that have traditionally existed between groups.
Automated testing tools are, however, only one part of the solution. The complete solution also rests on giving the team the principles, tools, and services needed to develop software efficiently.

Why outsource testing?
Skill and expertise - Developing and maintaining a team with the expertise to thoroughly test complex and large applications is expensive and effort-intensive; testing a software application now involves a variety of skills.

Focus - Using a dedicated and expert test team frees the development team to focus on sharpening their core skills in design and development, in their domain areas.

Independent assessment - An independent test team looks afresh at each test project while bringing the experience of earlier test assignments for different clients, on multiple platforms, and across different domain areas.

Save time - Testing can go in parallel with the software development life cycle to minimize the time needed to develop the software.

Reduce Cost - Outsourcing testing offers the flexibility of having a large test team, only when needed. This reduces the carrying costs and at the same time reduces the ramp up time and costs associated with hiring and training temporary personnel.



What steps are needed to develop and run software tests?
The following are some of the steps needed to develop and run software tests:
• Obtain requirements, functional design, and internal design specifications and other necessary documents
• Obtain budget and schedule requirements
• Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
• Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
• Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
• Determine test environment requirements (hardware, software, communications, etc.)
• Determine test-ware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
• Determine test input data requirements
• Identify tasks, those responsible for tasks, and labor requirements
• Set schedule estimates, timelines, milestones
• Determine input equivalence classes, boundary value analyses, and error classes
• Prepare test plan document and have needed reviews/approvals
• Write test cases
• Have needed reviews/inspections/approvals of test cases
• Prepare test environment and test ware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
• Obtain and install software releases
• Perform tests
• Evaluate and report results
• Track problems/bugs and fixes
• Retest as needed
• Maintain and update test plans, test cases, test environment, and test ware through the life cycle
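The "write test cases … perform tests … evaluate and report results" steps above can be sketched with a tiny, hypothetical test runner; real projects would use an established framework rather than this hand-rolled loop.

```python
# Minimal illustration of writing test cases, running them, and
# reporting results. The cases and runner are hypothetical examples.

test_cases = [
    # (test case name, check that returns True on pass)
    ("addition",     lambda: 2 + 2 == 4),
    ("upper-casing", lambda: "abc".upper() == "ABC"),
]

def run(cases):
    """Execute each test case and report a pass/fail summary."""
    results = {name: check() for name, check in cases}
    passed = sum(results.values())
    print(f"{passed}/{len(results)} test cases passed")
    return results

results = run(test_cases)
```

In practice the test tracking and problem/bug tracking steps feed off exactly this kind of per-case result record.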

What is Software Testing ?

Testing is a process used to help identify the correctness, completeness, and quality of developed computer software. Even so, testing can never completely establish the correctness of computer software. In other words, testing is essentially criticism or comparison: comparing the actual result with the expected one.

There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to that of review or inspection, the word testing is connoted to mean the dynamic analysis of the product—putting the product through its paces.

The quality of the application can and normally does vary widely from system to system but some of the common quality attributes include reliability, stability, portability, maintainability and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and criteria.



Software testing is a crucial step in the development of any application. Unfortunately, time is not often on the side of the software testing squad. Fighting inflexible time frames and ship dates that have been set in stone by forces outside the development team, the software testers slog on to the deadline, with the honorable goal of finding and documenting every bug. I've been on a good number of outside software testing teams over the years ... and it always seems to go the same way.

No matter the program, the beta testers find something that they feel needs to get fixed before the disc goes gold. Alas, there's never time to fix all the bugs, and the program gets shipped, warts and all. In a triage scenario, veteran software testers know that it's inevitable that something has to slip through. They just hope that the most heinous nasties are squashed before the final release candidate.

Once the program ships, it doesn't take long for the user base to find the bugs. The unwitting and most vocal can tend to blame the Quality Assurance team and the beta software testing effort. "Why didn't you find and fix this?" they ask, without having the insight to consider the forces that come to bear.

When it's gotta ship, it's gotta ship. Ready-or-not, here it goes ...

By the time the shrink wrap hits the street, those last minute bugs may actually be fixed ... but the fixes won't be on the CD that shipped in the first boxes. They may be slipstreamed into subsequent releases, with patches made available over the Internet.

As a user (and a grizzled software tester), I've learned not to immediately jump on and install a release the day it comes out, when possible. I usually wait for the first round of fixes to ship before I roll the dice ...


Software Testing Interview Questions - 1(a)

What is 'Software Testing'?
Software testing involves operating a system or application under controlled conditions and evaluating the results; the controlled conditions should include both normal and abnormal conditions.

What is 'Software Quality Assurance'?
Software Quality Assurance involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.

What is the 'Software Quality Gap'?
The difference in the software, between the state of the project as planned and the actual state that has been verified as operating correctly, is called the software quality gap.


What is Equivalence Partitioning?
In equivalence partitioning, a test case is designed to uncover a group or class of errors, which limits the number of test cases that would otherwise need to be developed. The input domain is divided into classes or groups of data; these classes are known as equivalence classes, and the process of forming them is called equivalence partitioning. Equivalence classes represent a set of valid or invalid states for input conditions.
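A concrete sketch, using a hypothetical "age" field that accepts integers from 18 to 65: one representative value is picked from each equivalence class instead of testing every possible input.

```python
# Equivalence partitioning sketch for a hypothetical age field that
# accepts 18-65 inclusive. One representative value per class.

partitions = {
    "invalid: below range": 10,   # any value < 18 behaves the same
    "valid: in range":      30,   # any value in 18-65 behaves the same
    "invalid: above range": 70,   # any value > 65 behaves the same
}

def is_valid_age(age):
    """The (hypothetical) validation rule under test."""
    return 18 <= age <= 65

for label, representative in partitions.items():
    print(label, "->", is_valid_age(representative))
```

Three test cases stand in for the entire integer input space, because every other value in a class is expected to behave like its representative.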

What is Boundary Value Analysis?
It has been observed that programs that work correctly for a set of values in an equivalence class often fail on some special values, and these values often lie on the boundary of the equivalence class. Boundary values for each equivalence class, including the equivalence class of the output, should be covered. Boundary value test cases are also called extreme cases. Hence, a boundary value test case is a set of input data that lies on the edge or boundary of a class of input data, or that generates output lying at the boundary of a class of output data.
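For an inclusive numeric range, the classic boundary inputs are just below, on, and just above each edge. The sketch below assumes the same hypothetical 18-65 range used for equivalence partitioning.

```python
# Boundary value analysis sketch for a hypothetical inclusive range:
# test just below, on, and just above each boundary.

def boundary_values(low, high):
    """Return the classic boundary test inputs for an inclusive range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]
```

These six values target exactly the off-by-one mistakes (`<` versus `<=`, and the like) that boundary value analysis is designed to catch.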



Why does software have bugs?
Miscommunication or no communication - about the specifics of what an application should or shouldn't do (the application's requirements).
Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development.
Programming errors - programmers, like anyone else, can make mistakes.
Changing requirements - A redesign, rescheduling of engineers, effects on other projects, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors.
Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs.
Software development tools - various tools often introduce their own bugs or are poorly documented, resulting in added bugs.

What does "finding a bug" consist of?
Finding a bug consists of a number of steps:
• Searching for and locating a bug
• Analyzing the exact circumstances under which the bug occurs
• Documenting the bug found
• Reporting the bug and, if necessary, helping to reproduce the error
• Testing the fixed code to verify that it really is fixed

What will happen about bugs that are already known?
When a program is sent for testing (or a website given), then a list of any known bugs should accompany the program. If a bug is found, then the list will be checked to ensure that it is not a duplicate. Any bugs not found on the list will be assumed to be new.
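The duplicate check described above can be sketched in a few lines. The known-bug titles and the matching-on-exact-title approach are simplifications for illustration; real trackers match on more than a title string.

```python
# Sketch of checking found bugs against a known-bug list before logging
# them as new. Bug titles and exact-match logic are illustrative only.

known_bugs = {"crash on empty filename", "totals off by one on last page"}

def triage(found_bugs):
    """Split found bugs into genuinely new reports and duplicates."""
    new, duplicates = [], []
    for bug in found_bugs:
        (duplicates if bug in known_bugs else new).append(bug)
    return new, duplicates

new, dupes = triage(["crash on empty filename", "date parsed as US format"])
print("new:", new)          # ['date parsed as US format']
print("duplicates:", dupes) # ['crash on empty filename']
```

Only the entries in `new` go on to the full find-a-bug workflow; the duplicates are simply noted against the existing reports.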

What's the big deal about 'requirements'?
Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear & documented, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.

What can be done if requirements are changing continuously?
A common problem and a major headache.
• It's helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch.
• If the code is well commented and well documented, changes are easier for the developers.
• Use rapid prototyping whenever possible to help customers feel sure of their requirements and to minimize changes.
• Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes.
• Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level, generic-type test plans).

