Question Bank

Briefly describe your QA background and projects.
- Manual testing experience
- Defect tracking and reporting (Jira, TestRail, etc.)
- Collaboration with developers and product team
- End-to-end QA lifecycle involvement
- Exposure to multiple project domains (ERP, web, mobile)
- Understanding of SDLC/STLC
- Writing and executing test cases
- Experience with regression or UAT testing
- Clear explanation of QA role and contributions
- Measurable QA impact (e.g., reduced defects, improved coverage)

What types of testing have you done?
- Functional testing
- Regression testing
- Smoke / Sanity testing
- Integration testing
- User Acceptance Testing (UAT)
- Exploratory testing
- Performance / Load testing
- API testing (Postman, Swagger, etc.)
- Compatibility / Cross-browser testing
- Security or Negative testing

How do you keep your QA skills updated?
- Regularly learning from QA blogs or YouTube tutorials
- Following QA communities or forums (e.g., Ministry of Testing, Reddit QA)
- Taking online courses or certifications (ISTQB, Udemy, Coursera)
- Staying updated with testing tools and trends (e.g., Postman, JMeter, Cypress)
- Practicing new test management or automation tools
- Learning from peer reviews and retrospectives
- Participating in webinars or QA conferences
- Reading QA or Agile-related documentation (ISTQB, ISO, Agile testing guides)
- Experimenting with side projects or open-source QA tools
- Networking and knowledge sharing with QA peers

Difference between regression and retesting.
- Regression testing checks for new defects after changes or fixes
- Retesting verifies specific defects that were previously fixed
- Regression ensures no side effects from new code changes
- Retesting confirms that the defect is actually resolved
- Regression covers unchanged areas as well
- Retesting is limited to failed test cases only
- Regression is part of maintenance testing
- Retesting is a confirmation testing activity
- Regression can be automated
- Retesting is usually done manually

Test case design techniques?
- Equivalence Partitioning
- Boundary Value Analysis (BVA)
- Decision Table Testing
- State Transition Testing
- Use Case Testing
- Error Guessing
- Exploratory Testing
- Pairwise / Combinatorial Testing
- Cause-Effect Graphing
- Checklist-Based Testing

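The first two techniques can be sketched in code. A minimal Python example, assuming a hypothetical `validate_age` function whose valid partition is 18-60 inclusive (the function and its rule are invented for this illustration):

```python
# Boundary value analysis + equivalence partitioning for a hypothetical
# age field whose valid partition is [18, 60]. validate_age is invented
# for the example; it is not part of any real system under test.

def validate_age(age: int) -> bool:
    """Accept ages in the valid partition [18, 60]."""
    return 18 <= age <= 60

# Partitions: invalid-low (<18), valid (18-60), invalid-high (>60).
# Boundary values sit on and around each edge of the valid partition.
cases = [
    (17, False), (18, True), (19, True),   # lower boundary
    (59, True), (60, True), (61, False),   # upper boundary
    (5, False), (40, True), (99, False),   # one representative per partition
]

for age, expected in cases:
    assert validate_age(age) == expected, f"unexpected result for age={age}"
```

In a pytest suite the same case table would typically go into `@pytest.mark.parametrize` so that each boundary value reports as its own test.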
Severity vs priority difference?
- Severity measures the impact of a defect on the system
- Priority defines the order in which defects should be fixed
- Severity is decided by QA based on functionality impact
- Priority is decided by the project manager or product owner
- Severity shows the technical seriousness of a bug
- Priority shows the business urgency to fix the bug
- High severity, low priority example: crash in a rarely used module
- Low severity, high priority example: typo on the home page
- Severity remains the same across builds; priority can change
- Both help determine defect triage and release readiness

What’s in a good test case?
- Clear and descriptive test case title
- Unique test case ID or reference number
- Preconditions or setup steps defined
- Well-defined test steps in logical order
- Expected results clearly stated
- Actual results section for execution outcome
- Test data specified where applicable
- Pass/fail criteria clearly defined
- Linked requirement or user story reference
- Written in simple, unambiguous language

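The fields above can be captured in a simple structured record. A sketch in Python, with all IDs, names, and data invented for illustration (real field names would follow whatever test management tool is in use):

```python
# Illustrative test case record covering the fields listed above.
# Every identifier and value here is made up for the example.
test_case = {
    "id": "TC-LOGIN-001",                         # unique reference number
    "title": "Login succeeds with valid credentials",
    "requirement": "US-101",                      # linked user story
    "preconditions": ["User account exists", "Application is reachable"],
    "steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click Sign In",
    ],
    "test_data": {"username": "qa_user"},         # data specified where applicable
    "expected_result": "User lands on the dashboard",
    "actual_result": None,                        # filled in during execution
    "status": "Not Run",                          # Pass / Fail / Blocked after execution
}

# Pass/fail criterion: status becomes Pass only when actual matches expected.
required_fields = {"id", "title", "steps", "expected_result"}
assert required_fields <= set(test_case), "test case is missing required fields"
```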
What is exploratory testing?
- Testing without predefined test cases or scripts
- Simultaneous learning, test design, and execution
- Focuses on discovering unknown or unexpected defects
- Tester uses creativity, intuition, and domain knowledge
- Helps identify usability and edge-case issues
- Often used when documentation is limited or incomplete
- Session-based approach with defined time and goals
- Encourages tester ownership and critical thinking
- Finds issues missed by scripted or automated tests
- Documented through notes, mind maps, or session reports

Describe sanity vs smoke testing.
- Smoke testing checks basic functionality after build deployment
- Sanity testing verifies specific bug fixes or new functionality
- Smoke testing is broad and shallow
- Sanity testing is narrow and deep
- Smoke testing ensures the system is stable enough for further testing
- Sanity testing ensures recent changes didn’t break specific areas
- Smoke testing is usually planned and documented
- Sanity testing is often unscripted and quickly executed
- Smoke testing is performed on new builds
- Sanity testing is performed after minor code or defect fixes

Common QA metrics you track?
- Defect density (defects per module or size)
- Test coverage (percentage of requirements or code tested)
- Defect leakage (bugs found after release)
- Test execution progress (executed vs planned)
- Defect rejection rate (invalid or duplicate bugs)
- Automation coverage (percentage of tests automated)
- Defect severity and priority distribution
- Mean time to detect (MTTD) and mean time to repair (MTTR)
- Pass/fail rate of test cases
- Customer-found defect count or UAT defect count

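Several of these metrics are simple ratios. A sketch of the calculations in Python; every count below is invented for the example:

```python
# Illustrative calculations for a few of the metrics above.
# All counts and sizes are made up for demonstration.
defects_in_testing = 45          # defects found before release
defects_after_release = 5        # found in UAT / production (leakage)
module_size_kloc = 12.5          # module size in thousand lines of code
requirements_total = 200
requirements_covered = 184
bugs_rejected = 6                # invalid or duplicate reports

defect_density = defects_in_testing / module_size_kloc
test_coverage = requirements_covered / requirements_total * 100
defect_leakage = defects_after_release / (defects_in_testing + defects_after_release) * 100
defect_rejection_rate = bugs_rejected / defects_in_testing * 100

print(f"Defect density:  {defect_density:.1f} defects/KLOC")
print(f"Test coverage:   {test_coverage:.0f}% of requirements")
print(f"Defect leakage:  {defect_leakage:.1f}%")
print(f"Rejection rate:  {defect_rejection_rate:.1f}%")
```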
What are the contents of a test plan?
- Test plan identifier and version
- Scope and objectives of testing
- Features to be tested and not tested
- Test strategy and approach (manual/automation)
- Test environment and configuration details
- Roles and responsibilities of team members
- Entry and exit criteria for testing phases
- Test deliverables (reports, logs, summaries)
- Testing schedule and milestones
- Risk assessment and mitigation plan

What key metrics do you measure in performance testing?
- Response time (average and percentile)
- Throughput (requests or transactions per second)
- Latency (delay between request and response)
- Concurrent users or load (active sessions)
- Error rate (failed transactions or HTTP errors)
- CPU utilization
- Memory usage
- Network bandwidth or throughput consumption
- Disk I/O performance
- Peak response time and system bottlenecks

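A few of these metrics can be derived directly from raw request timings. A minimal Python sketch using invented sample data; in practice the numbers would come from a tool such as JMeter, k6, or Locust:

```python
import math

# Invented sample data: per-request response times in milliseconds,
# plus run-level counts. Real values come from the load tool's results.
samples_ms = [120, 135, 140, 150, 155, 160, 180, 210, 240, 900]  # one slow outlier
errors = 2
total_requests = 500
duration_s = 60

avg_ms = sum(samples_ms) / len(samples_ms)

def percentile(data, p):
    """Nearest-rank percentile: value at rank ceil(p/100 * n) of sorted data."""
    s = sorted(data)
    rank = max(1, math.ceil(p / 100 * len(s)))
    return s[rank - 1]

p95_ms = percentile(samples_ms, 95)           # dominated by the outlier
throughput_rps = total_requests / duration_s  # requests per second
error_rate_pct = errors / total_requests * 100

print(f"avg={avg_ms:.0f}ms p95={p95_ms}ms "
      f"throughput={throughput_rps:.1f} rps errors={error_rate_pct:.2f}%")
```

Note how the average hides the 900 ms outlier while the 95th percentile exposes it, which is why percentiles appear alongside averages in the list above.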
What’s the difference between load testing and stress testing?
- Load testing checks performance under expected user load
- Stress testing checks system behavior beyond maximum capacity
- Load testing focuses on stability and throughput
- Stress testing focuses on system failure and recovery
- Load testing identifies performance bottlenecks under normal usage
- Stress testing identifies the system’s breaking point
- Load testing ensures acceptable response time under expected load
- Stress testing ensures system stability under extreme conditions
- Load testing is part of performance tuning
- Stress testing is part of reliability and failure analysis

How do you analyze performance test results?
- Compare actual results with baseline or SLA targets
- Identify performance bottlenecks (CPU, memory, DB, network)
- Analyze response time trends and percentile distributions
- Review throughput, error rate, and concurrency patterns
- Correlate metrics from application, server, and database logs
- Identify the breaking point or saturation level
- Monitor resource utilization across components
- Document observations, anomalies, and trends
- Recommend optimization or tuning actions
- Prepare and share summary report with key findings and graphs
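The first step, comparing measured values against baseline or SLA targets, can be sketched as follows. The thresholds and measurements are invented for illustration; real SLA targets come from the project's non-functional requirements:

```python
# Invented SLA targets and measured values for demonstration only.
sla = {"p95_ms": 500, "error_rate_pct": 1.0, "throughput_rps": 100}
measured = {"p95_ms": 640, "error_rate_pct": 0.4, "throughput_rps": 112}

def meets_target(metric, value, target):
    """Throughput must stay at or above target; time and errors at or below."""
    if metric == "throughput_rps":
        return value >= target
    return value <= target

violations = [m for m in sla if not meets_target(m, measured[m], sla[m])]
for m in violations:
    print(f"SLA MISS: {m} measured={measured[m]} target={sla[m]}")
```

Flagged violations then feed the bottleneck analysis and the tuning recommendations listed above.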

Internal Documentation — Microtec