
QA Report Template: How to Document a Visual Testing Session

Every QA session produces findings. What teams do with those findings — how they document, communicate, and track them — is the difference between bugs that get fixed and bugs that get shipped.

This guide gives you a practical QA report template for visual testing sessions, explains what each section should contain, and shows how to generate a shareable PDF report that engineers and clients can actually use.


What a QA Report Should Include

A QA report is a structured record of what was tested, what was found, and what the recommendation is before release. For visual testing specifically, it needs to capture:

  1. Session metadata — what release or build was tested, who ran the test, when, and on what platform
  2. Scope — which screens, flows, or components were reviewed
  3. Findings — a list of issues found, each with a screenshot, description, and severity
  4. Pass/fail status — which areas passed review and which have open issues
  5. Recommendation — ship, ship with known issues, or block

Without all five sections, the report doesn't function as a handoff document. A list of screenshots without context isn't a report — it's a folder.


Visual QA Report Template

Here's a complete template you can use directly or adapt for your team.


Visual QA Report

Project: [Project or product name]
Build / Version: [Build number, branch, or release tag]
Environment: [Staging / Production / Preview URL]
Tester: [Your name]
Date: [Date of testing session]
Platform tested: [macOS / Windows / iOS / Android — include browser + version for web]


Scope

List exactly what was reviewed in this session. Be specific — "the whole app" is not useful. Good examples:

  • Checkout flow (cart → shipping → confirmation)
  • Dark mode across all five primary screens
  • Mobile viewport (375px) for the onboarding sequence
  • New pricing page — layout, typography, and CTA button states

Summary

| Metric | Count |
|---|---|
| Screens reviewed | — |
| Issues found | — |
| Critical | — |
| Major | — |
| Minor | — |
| Passed without issues | — |
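If you track findings in a structured form, the summary counts can be tallied automatically rather than filled in by hand. A minimal sketch, assuming each finding is a dict with a `severity` field (the function and field names here are illustrative, not part of any tool's API):

```python
from collections import Counter

def summarize(findings, screens_reviewed, screens_passed):
    """Tally severity counts for the Summary table (hypothetical field names)."""
    counts = Counter(f["severity"] for f in findings)
    return {
        "Screens reviewed": screens_reviewed,
        "Issues found": len(findings),
        "Critical": counts.get("Critical", 0),
        "Major": counts.get("Major", 0),
        "Minor": counts.get("Minor", 0),
        "Passed without issues": screens_passed,
    }

findings = [{"severity": "Major"}, {"severity": "Minor"}, {"severity": "Minor"}]
summary = summarize(findings, screens_reviewed=8, screens_passed=5)
```

Keeping the tally in code means the table never drifts out of sync with the findings list.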


Findings

For each issue found, include:

Issue #1

  • Screen: [Which screen or component]
  • Severity: Critical / Major / Minor
  • Description: [What's wrong — be specific about what was expected vs. what was found]
  • Screenshot: [Annotated screenshot with pin at exact location]
  • Viewport / Platform: [Where this was observed]
  • Ticket: [Link to Jira or ClickUp ticket if created]

Repeat for each issue. Number them sequentially.
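The per-issue fields above map naturally onto a small record type. A sketch of one way to model a finding, with a completeness check before it goes into the report (all names here are illustrative assumptions, not a real schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    """One entry in the Findings section; field names are illustrative."""
    number: int
    screen: str
    severity: str           # "Critical" | "Major" | "Minor"
    description: str        # expected vs. actual, one or two sentences
    screenshot: str         # path or URL to the annotated screenshot
    viewport_platform: str  # e.g. "Windows Chrome 121, 1440x900"
    ticket: Optional[str] = None  # Jira/ClickUp link, if one was created

    def is_complete(self) -> bool:
        # A finding without a screen, description, or screenshot isn't actionable.
        return bool(self.screen and self.description and self.screenshot)

f = Finding(
    number=1,
    screen="Pricing page",
    severity="Major",
    description="CTA has 12px left padding, expected 16px",
    screenshot="shots/issue-1.png",
    viewport_platform="375px, iOS Safari",
)
```

The `ticket` field is optional by design: tickets are often created after the report is reviewed, not before.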


Passed Screens

List screens that were reviewed and passed without issues. This is as important as the findings — it tells stakeholders which areas were verified and gives confidence in what is ready to ship.


Recommendation

One of three outcomes:

  • Ship — no blocking issues found, all reviewed areas pass
  • Ship with known issues — minor issues noted but none are release blockers; linked tickets are non-critical
  • Block — critical or major issues found that must be resolved before release
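The three outcomes follow mechanically from the severity counts, which makes the recommendation easy to derive rather than debate. A sketch under the severity rules stated above (the function name is an assumption):

```python
def recommend(critical: int, major: int, minor: int) -> str:
    """Map severity counts to the three report outcomes described above."""
    if critical > 0 or major > 0:
        # Critical and major issues must be resolved before release.
        return "Block"
    if minor > 0:
        # Minor issues are noted but are not release blockers.
        return "Ship with known issues"
    return "Ship"
```

Encoding the rule this way also keeps severity labels honest: if a "Minor" bug feels like it should block release, it was mislabeled.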

What Makes a Good Visual Bug Report

The findings section is where most QA reports fall apart. "Button looks off" is not a useful bug report. "The primary CTA button on the pricing page has 12px left padding instead of 16px at 375px viewport — see pin #1 in screenshot" is.

Every finding needs four things:

1. An annotated screenshot. The engineer fixing the bug should be able to open your report and immediately see exactly what you're pointing at. A numbered pin dropped at the precise location — not a full-page screenshot with a vague arrow — saves a full back-and-forth cycle.

2. Expected vs. actual. "The header background should be #1A1A2E in dark mode. It's currently inheriting #FFFFFF from the light theme." One sentence. Clear and actionable.

3. Viewport and platform. Visual bugs are often environment-specific. "Observed on Windows Chrome 121, 1440×900 viewport" tells the engineer exactly where to reproduce it.

4. Severity. Use a consistent three-level system: Critical (blocks release), Major (must fix before launch), Minor (fix in next cycle). Don't overuse Critical — it dilutes the signal.


How to Generate the PDF Report

The bottleneck in visual QA reporting isn't finding bugs — it's assembling the report. Manually copying screenshots into a document, adding annotations in PowerPoint, writing descriptions, then exporting as PDF: this takes 30–60 minutes on a typical session, and most teams skip it entirely as a result.

Captur generates this report for you.

Here's the workflow:

  1. Capture screenshots as you review — they land in your project Space automatically
  2. Annotate each screenshot with numbered pins and comments at the exact issue location
  3. Group the screenshots into a named Visual Review — this becomes your report session
  4. Export as a PDF report — annotated screenshots, comments, and session metadata export in one click

The output is a structured PDF you can email to a client, attach to a release ticket, or add to a Confluence page. The formatting is handled — you spend time on the QA, not the document.
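If you are assembling the document yourself instead, the template structure lends itself to straightforward rendering. A minimal sketch that emits the report as Markdown (which tools like pandoc can convert to PDF); the metadata keys and finding fields are hypothetical, and this is not Captur's export format:

```python
def render_report(meta: dict, findings: list[dict]) -> str:
    """Render session metadata and findings as a Markdown report (illustrative)."""
    lines = ["# Visual QA Report", ""]
    for key, value in meta.items():
        lines.append(f"**{key}:** {value}")
    lines += ["", "## Findings", ""]
    for i, f in enumerate(findings, start=1):
        lines.append(f"### Issue #{i}")
        lines.append(f"- Screen: {f['screen']}")
        lines.append(f"- Severity: {f['severity']}")
        lines.append(f"- Description: {f['description']}")
        lines.append(f"- Screenshot: ![pin]({f['screenshot']})")
        lines.append("")
    return "\n".join(lines)

report = render_report(
    {"Project": "Demo", "Build / Version": "1.2.0"},
    [{"screen": "Pricing page", "severity": "Major",
      "description": "CTA padding 12px, expected 16px",
      "screenshot": "shots/issue-1.png"}],
)
```

Even a rough script like this beats pasting screenshots into slides, because the report structure stays consistent from session to session.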


When to Send a QA Report

Not every testing session needs a formal PDF report. Here's a practical guide:

Always send a report:

  • Pre-release review on any client-facing product
  • Any build going to a client for sign-off
  • Major feature releases on products with design specifications
  • Regression testing sessions before a significant version bump

A Jira/ClickUp ticket per bug is sufficient:

  • Routine sprint testing with no external stakeholders
  • Internal builds between team members
  • Quick smoke tests on minor releases

No formal documentation needed:

  • Exploratory testing during active development
  • Quick "does this look right" checks between designer and developer

The rule of thumb: if someone outside your immediate team needs to understand what was tested and what was found, send a report.


QA Report Checklist

Before sending:

  • [ ] Build / version number is recorded
  • [ ] Scope is clearly defined — no ambiguity about what was and wasn't reviewed
  • [ ] Every finding has an annotated screenshot
  • [ ] Every finding has expected vs. actual description
  • [ ] Viewport and platform are noted for each finding
  • [ ] Severity is assigned consistently
  • [ ] Passed screens are listed (not just findings)
  • [ ] Recommendation (ship / ship with issues / block) is explicit
  • [ ] All findings are logged in Jira or ClickUp with screenshots attached

Building the Reporting Habit

The teams that consistently ship polished UIs aren't doing more QA work than everyone else — they have a faster reporting workflow, so the overhead doesn't become a reason to skip the process.

A QA report that takes 5 minutes to generate gets done before every release. One that takes an hour gets skipped.

Captur is built around this: capture, annotate, and export a shareable PDF report from a single interface, without assembling a document manually.

Join the waitlist →