Question Bank
| Briefly describe your QA background and projects. | 10 |
|---|---|
| Manual testing experience | 1 |
| Defect tracking and reporting (Jira, TestRail, etc.) | 1 |
| Collaboration with developers and product team | 1 |
| End-to-end QA lifecycle involvement | 1 |
| Exposure to multiple project domains (ERP, web, mobile) | 1 |
| Understanding of SDLC/STLC | 1 |
| Writing and executing test cases | 1 |
| Experience with regression or UAT testing | 1 |
| Clear explanation of QA role and contributions | 1 |
| Measurable QA impact (e.g., reduced defects, improved coverage) | 1 |

| What types of testing have you done? | 10 |
|---|---|
| Functional testing | 1 |
| Regression testing | 1 |
| Smoke / Sanity testing | 1 |
| Integration testing | 1 |
| User Acceptance Testing (UAT) | 1 |
| Exploratory testing | 1 |
| Performance / Load testing | 1 |
| API testing (Postman, Swagger, etc.) | 1 |
| Compatibility / Cross-browser testing | 1 |
| Security or Negative testing | 1 |
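
Several of these types reduce to the same mechanics at the code level. As a hedged sketch of negative testing, the snippet below checks that a validator rejects malformed input rather than crashing; the `validate_username` function and its rules are invented for illustration:

```python
import re

def validate_username(name: str) -> bool:
    """Hypothetical rule: 3-20 word characters, not purely digits."""
    return bool(re.fullmatch(r"\w{3,20}", name)) and not name.isdigit()

# Positive case: a well-formed input is accepted
assert validate_username("qa_lead_01")

# Negative cases: each invalid input must be rejected, never raise
for bad in ["", "ab", "x" * 21, "drop;table", "  spaced  ", "12345"]:
    assert not validate_username(bad), f"accepted invalid input: {bad!r}"
```

The same pattern scales to API-level negative tests: send malformed payloads and assert on the rejection, not just on happy-path responses.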

| How do you keep your QA skills updated? | 10 |
|---|---|
| Regularly learning from QA blogs or YouTube tutorials | 1 |
| Following QA communities or forums (e.g., Ministry of Testing, Reddit QA) | 1 |
| Taking online courses or certifications (ISTQB, Udemy, Coursera) | 1 |
| Staying updated with testing tools and trends (e.g., Postman, JMeter, Cypress) | 1 |
| Practicing new test management or automation tools | 1 |
| Learning from peer reviews and retrospectives | 1 |
| Participating in webinars or QA conferences | 1 |
| Reading QA or Agile-related documentation (ISTQB, ISO, Agile testing guides) | 1 |
| Experimenting with side projects or open-source QA tools | 1 |
| Networking and knowledge sharing with QA peers | 1 |

| What is the difference between regression testing and retesting? | 10 |
|---|---|
| Regression testing checks for new defects after changes or fixes | 1 |
| Retesting verifies specific defects that were previously fixed | 1 |
| Regression ensures no side effects from new code changes | 1 |
| Retesting confirms that the defect is actually resolved | 1 |
| Regression covers unchanged areas as well | 1 |
| Retesting is limited to failed test cases only | 1 |
| Regression is part of maintenance testing | 1 |
| Retesting is a confirmation testing activity | 1 |
| Regression can be automated | 1 |
| Retesting is usually done manually | 1 |
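
The distinction above can be shown in miniature. In this illustrative sketch (the `discount` function and its bug are invented), retesting re-runs only the case that previously failed, while regression re-runs the wider suite including unchanged behavior:

```python
# Hypothetical bug fix: discount() used to crash on an empty cart.
def discount(total: int, items: int) -> int:
    """Total in cents; toy example used only to illustrate."""
    if items == 0:          # the fix under test: empty cart costs nothing
        return 0
    return total - total // 10 if items >= 5 else total

# Retesting: re-run only the previously failed case to confirm the fix
assert discount(10_000, 0) == 0

# Regression: re-run the wider suite, including unchanged paths,
# to confirm the fix introduced no side effects
regression_suite = [
    ((10_000, 1), 10_000),   # unchanged path: no discount
    ((10_000, 5), 9_000),    # unchanged path: bulk discount
    ((10_000, 0), 0),        # the fixed path
]
for args, expected in regression_suite:
    assert discount(*args) == expected
```
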

| What test case design techniques do you use? | 10 |
|---|---|
| Equivalence Partitioning | 1 |
| Boundary Value Analysis (BVA) | 1 |
| Decision Table Testing | 1 |
| State Transition Testing | 1 |
| Use Case Testing | 1 |
| Error Guessing | 1 |
| Exploratory Testing | 1 |
| Pairwise / Combinatorial Testing | 1 |
| Cause-Effect Graphing | 1 |
| Checklist-Based Testing | 1 |
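
The first two techniques are easy to demonstrate concretely. A minimal sketch, assuming a hypothetical rule that valid ages are 18-65 inclusive: one representative per equivalence partition, plus three-value boundary value analysis around each edge:

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical rule under test: valid ages are 18..65 inclusive."""
    return 18 <= age <= 65

def boundary_values(low: int, high: int) -> list:
    """Classic 3-value BVA: just below, on, and just above each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Equivalence partitioning: one representative per class is enough
partitions = {"below": 5, "valid": 40, "above": 90}
assert not is_valid_age(partitions["below"])
assert is_valid_age(partitions["valid"])
assert not is_valid_age(partitions["above"])

# Boundary value analysis around 18 and 65
expected = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
for value in boundary_values(18, 65):
    assert is_valid_age(value) == expected[value]
```
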

| What is the difference between severity and priority? | 10 |
|---|---|
| Severity measures the impact of a defect on the system | 1 |
| Priority defines the order in which defects should be fixed | 1 |
| Severity is decided by QA based on functionality impact | 1 |
| Priority is decided by project manager or product owner | 1 |
| Severity shows the technical seriousness of a bug | 1 |
| Priority shows the business urgency to fix the bug | 1 |
| High severity, low priority example: crash in rarely used module | 1 |
| Low severity, high priority example: typo on home page | 1 |
| Severity remains the same across builds; priority can change | 1 |
| Both help determine defect triage and release readiness | 1 |
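
One way a triage board might apply this split is to sort by business urgency first and technical impact second. This is an illustrative sketch only; the field names and scales are invented:

```python
# Illustrative scales; real trackers use their own enumerations.
SEVERITY = {"critical": 4, "major": 3, "minor": 2, "trivial": 1}
PRIORITY = {"high": 3, "medium": 2, "low": 1}

def triage_order(defects):
    """Fix order: business urgency (priority) first, then technical impact."""
    return sorted(
        defects,
        key=lambda d: (PRIORITY[d["priority"]], SEVERITY[d["severity"]]),
        reverse=True,
    )

defects = [
    {"id": "D1", "severity": "critical", "priority": "low"},  # crash in rarely used module
    {"id": "D2", "severity": "trivial", "priority": "high"},  # typo on home page
    {"id": "D3", "severity": "major", "priority": "high"},
]
# High-priority D3 and D2 come before the high-severity but low-priority D1
ordered = [d["id"] for d in triage_order(defects)]
```
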

| What’s in a good test case? | 10 |
|---|---|
| Clear and descriptive test case title | 1 |
| Unique test case ID or reference number | 1 |
| Preconditions or setup steps defined | 1 |
| Well-defined test steps in logical order | 1 |
| Expected results clearly stated | 1 |
| Actual results section for execution outcome | 1 |
| Test data specified where applicable | 1 |
| Pass/fail criteria clearly defined | 1 |
| Linked requirement or user story reference | 1 |
| Written in simple, unambiguous language | 1 |
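
The checklist above can be captured as a structured record. This is one possible sketch, not any specific tool's schema; all field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str                 # unique reference number
    title: str                   # clear, descriptive title
    requirement: str             # linked requirement or user story
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)      # logical order
    test_data: dict = field(default_factory=dict)
    expected_result: str = ""
    actual_result: str = ""      # filled in at execution time

    def passed(self) -> bool:
        """Pass/fail criterion: actual matches expected exactly."""
        return self.actual_result == self.expected_result

tc = TestCase(
    case_id="TC-101",
    title="Login with valid credentials",
    requirement="US-42",
    preconditions=["User account exists"],
    steps=["Open login page", "Enter credentials", "Submit"],
    expected_result="Dashboard is shown",
)
tc.actual_result = "Dashboard is shown"   # recorded during execution
```
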

| What is exploratory testing? | 10 |
|---|---|
| Testing without predefined test cases or scripts | 1 |
| Simultaneous learning, test design, and execution | 1 |
| Focuses on discovering unknown or unexpected defects | 1 |
| Tester uses creativity, intuition, and domain knowledge | 1 |
| Helps identify usability and edge-case issues | 1 |
| Often used when documentation is limited or incomplete | 1 |
| Session-based approach with defined time and goals | 1 |
| Encourages tester ownership and critical thinking | 1 |
| Finds issues missed by scripted or automated tests | 1 |
| Documented through notes, mind maps, or session reports | 1 |

| Describe the difference between smoke and sanity testing. | 10 |
|---|---|
| Smoke testing checks basic functionality after build deployment | 1 |
| Sanity testing verifies specific bug fixes or new functionality | 1 |
| Smoke testing is broad and shallow | 1 |
| Sanity testing is narrow and deep | 1 |
| Smoke testing ensures the system is stable enough for further testing | 1 |
| Sanity testing ensures recent changes didn’t break specific areas | 1 |
| Smoke testing is usually planned and documented | 1 |
| Sanity testing is often unscripted and quickly executed | 1 |
| Smoke test is performed on new builds | 1 |
| Sanity test is performed after minor code or defect fixes | 1 |
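
In practice this often comes down to tagging tests and running only the relevant subset. A hedged sketch, with made-up test names and tags, of running the broad-but-shallow smoke set on a new build versus the narrow sanity set after a fix:

```python
# Toy tests; real suites would use a framework's markers (e.g. pytest.mark).
def test_app_starts():      return True
def test_login_works():     return True
def test_report_totals():   return True

SUITE = [
    (test_app_starts,    {"smoke"}),
    (test_login_works,   {"smoke", "sanity"}),
    (test_report_totals, {"sanity"}),          # narrow-and-deep check
]

def run(tag: str) -> list:
    """Run only tests carrying the given tag; return names that passed."""
    return [fn.__name__ for fn, tags in SUITE if tag in tags and fn()]

smoke_run = run("smoke")     # broad: is the build stable enough to test?
sanity_run = run("sanity")   # narrow: did the recent fix break this area?
```
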

| What common QA metrics do you track? | 10 |
|---|---|
| Defect density (defects per module or size) | 1 |
| Test coverage (percentage of requirements or code tested) | 1 |
| Defect leakage (bugs found after release) | 1 |
| Test execution progress (executed vs planned) | 1 |
| Defect rejection rate (invalid or duplicate bugs) | 1 |
| Automation coverage (percentage of tests automated) | 1 |
| Defect severity and priority distribution | 1 |
| Mean time to detect (MTTD) and mean time to resolve (MTTR) | 1 |
| Pass/fail rate of test cases | 1 |
| Customer-found defect count or UAT defect count | 1 |
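
Most of these metrics reduce to simple ratios. A sketch with invented sample counts, showing how a few of them are computed:

```python
# Fabricated sample counts for illustration only
defects_found_in_qa = 45
defects_found_after_release = 5
rejected_defects = 4          # invalid or duplicate reports
modules = 9
executed, planned = 180, 200
passed = 171

defect_density = defects_found_in_qa / modules
defect_leakage = defects_found_after_release / (
    defects_found_in_qa + defects_found_after_release
)
rejection_rate = rejected_defects / (defects_found_in_qa + rejected_defects)
execution_progress = executed / planned
pass_rate = passed / executed
```
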

| What does a test plan contain? | 10 |
|---|---|
| Test plan identifier and version | 1 |
| Scope and objectives of testing | 1 |
| Features to be tested and not tested | 1 |
| Test strategy and approach (manual/automation) | 1 |
| Test environment and configuration details | 1 |
| Roles and responsibilities of team members | 1 |
| Entry and exit criteria for testing phases | 1 |
| Test deliverables (reports, logs, summaries) | 1 |
| Testing schedule and milestones | 1 |
| Risk assessment and mitigation plan | 1 |
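
The checklist above can double as a completeness check on a draft plan. A minimal sketch, with illustrative section names, that flags required sections a draft has not yet covered:

```python
# Section names mirror the checklist above; they are illustrative, not a standard.
REQUIRED_SECTIONS = {
    "identifier", "scope", "features_in_scope", "features_out_of_scope",
    "strategy", "environment", "roles", "entry_exit_criteria",
    "deliverables", "schedule", "risks",
}

def missing_sections(plan: dict) -> set:
    """Return required sections the draft plan does not yet cover."""
    return REQUIRED_SECTIONS - plan.keys()

draft_plan = {
    "identifier": "TP-2024-01 v1.0",
    "scope": "Release 2.3 web checkout flow",
    "strategy": "Manual functional + automated regression",
    "environment": "Staging, Chrome/Firefox latest",
}
gaps = missing_sections(draft_plan)   # sections still to be written
```
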

| What key metrics do you measure in performance testing? | 10 |
|---|---|
| Response time (average and percentile) | 1 |
| Throughput (requests or transactions per second) | 1 |
| Latency (delay between request and response) | 1 |
| Concurrent users or load (active sessions) | 1 |
| Error rate (failed transactions or HTTP errors) | 1 |
| CPU utilization | 1 |
| Memory usage | 1 |
| Network bandwidth or throughput consumption | 1 |
| Disk I/O performance | 1 |
| Peak response time and system bottlenecks | 1 |
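
Several of these derive directly from raw samples. A sketch with fabricated response times (ms) and a fabricated request window, including a nearest-rank percentile so tail latency is reported rather than just the mean:

```python
# Fabricated sample data for illustration
samples_ms = [120, 95, 110, 480, 130, 100, 105, 90, 115, 125]
errors, requests = 3, 600
window_seconds = 60

avg_ms = sum(samples_ms) / len(samples_ms)

def percentile(data, p):
    """Nearest-rank percentile: smallest value >= p% of the samples."""
    ranked = sorted(data)
    k = max(0, -(-p * len(ranked) // 100) - 1)   # ceil(p*n/100) - 1
    return ranked[k]

p95_ms = percentile(samples_ms, 95)      # tail latency, not just the mean
throughput = requests / window_seconds   # requests per second
error_rate = errors / requests           # failed transactions
peak_ms = max(samples_ms)
```

Note how one slow outlier (480 ms) barely moves the average but dominates the p95, which is why percentiles are reported alongside means.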

| What’s the difference between load testing and stress testing? | 10 |
|---|---|
| Load testing checks performance under expected user load | 1 |
| Stress testing checks system behavior beyond maximum capacity | 1 |
| Load testing focuses on stability and throughput | 1 |
| Stress testing focuses on system failure and recovery | 1 |
| Load testing identifies performance bottlenecks under normal usage | 1 |
| Stress testing identifies the system’s breaking point | 1 |
| Load testing ensures acceptable response time under expected load | 1 |
| Stress testing ensures system stability under extreme conditions | 1 |
| Load testing is part of performance tuning | 1 |
| Stress testing is part of reliability and failure analysis | 1 |
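
A toy model separates the two ideas: a load test asserts on behavior at the expected load, while a stress test keeps ramping past it to find the breaking point. The capacity figure and error model below are invented, not from any real system:

```python
CAPACITY = 500                     # max concurrent users the toy "system" handles

def error_rate(users: int) -> float:
    """Toy system model: clean under capacity, degrading past it."""
    if users <= CAPACITY:
        return 0.0
    return min(1.0, (users - CAPACITY) / CAPACITY)

# Load test: expected load (here 400 users) must meet the SLA
assert error_rate(400) == 0.0

# Stress test: ramp well past expected load to find the breaking point
def find_breaking_point(step=100, max_users=2000, threshold=0.05):
    for users in range(step, max_users + 1, step):
        if error_rate(users) > threshold:
            return users           # first load level that breaks the SLA
    return None

breaking_point = find_breaking_point()
```
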

| How do you analyze performance test results? | 10 |
|---|---|
| Compare actual results with baseline or SLA targets | 1 |
| Identify performance bottlenecks (CPU, memory, DB, network) | 1 |
| Analyze response time trends and percentile distributions | 1 |
| Review throughput, error rate, and concurrency patterns | 1 |
| Correlate metrics from application, server, and database logs | 1 |
| Identify the breaking point or saturation level | 1 |
| Monitor resource utilization across components | 1 |
| Document observations, anomalies, and trends | 1 |
| Recommend optimization or tuning actions | 1 |
| Prepare and share summary report with key findings and graphs | 1 |
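
The first step above, comparing measured results against baseline or SLA targets, can be sketched as a simple check. The targets and measurements here are invented; note that throughput must be compared in the opposite direction from latency and error rate:

```python
# Invented SLA targets and measured results for illustration
sla = {"p95_ms": 500, "error_rate": 0.01, "throughput_rps": 100}
measured = {"p95_ms": 620, "error_rate": 0.004, "throughput_rps": 140}

def sla_violations(sla: dict, measured: dict) -> list:
    """Flag metrics outside target: latency/errors too high, throughput too low."""
    bad = []
    for metric, target in sla.items():
        value = measured[metric]
        if metric == "throughput_rps":
            ok = value >= target      # higher is better
        else:
            ok = value <= target      # lower is better
        if not ok:
            bad.append(metric)
    return bad

# Violating metrics become candidates for bottleneck analysis
violations = sla_violations(sla, measured)
```
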