The QA Audit.
Published in Public.

The 10-point quality engineering framework we apply to every consumer product we evaluate — and the same framework we run as a paid Pre-Launch Audit for small brands shipping into Amazon, retail, and DTC.

20+ Years QA Experience
Federal Recall Data Pipeline
AI-Accelerated Research

Most consumer product reviews are guesswork dressed up as opinion. Most pre-launch quality reviews cost twenty to fifty thousand dollars and take six weeks. Neither of those works for the actual problem.

The QA Audit is a published, repeatable framework that closes the gap. AI handles the data labor — pulling spec sheets, summarizing one-star review patterns, cross-referencing federal recall databases, checking manufacturer track records. A senior quality engineer with twenty years of experience across medical devices, aerospace, and consumer goods provides the judgment.

It's the same methodology applied to every consumer product we write about on this site, and the same methodology we deliver as a written report when a small brand pays us to run it on a product they're about to ship. Publishing it here means you can see exactly what the work looks like before you decide whether you want it done on your own product.

The 10 Questions We Ask of Every Product

Not every product needs every question answered in depth. But every product we evaluate gets every question asked — and the answers are what make a verdict more than an opinion.

1. Materials & Construction

What is it actually made of? Does the bill of materials match what's claimed in the listing? Are the materials appropriate for the product's intended use, lifespan, and operating environment?

2. Tolerances & Specifications

Where the product makes a measurable claim — power output, capacity, accuracy, weight rating — do the published specs hold up against documented user reports and category norms?

3. Failure Modes

What is most likely to break, and what happens when it does? A product that fails safely is fundamentally different from one that fails dangerously, even at the same break rate.

4. Manufacturer Track Record

What does federal recall and complaint data say about this manufacturer's other products? Patterns across a brand are usually more predictive than any single product's reviews.

5. Category Recall History

How often does this product category get recalled? Which failure modes show up over and over? CPSC, FDA, USDA, and NHTSA databases tell that story across years of data.

6. Regulatory Exposure

What compliance requirements does this product trigger? FDA registration, CPSC certification, FCC, UL, age-grading, labeling rules: what's table stakes, what's optional, and what's missing.

7. One-Star Review Patterns

Negative reviews contain more useful signal than positive ones. We read them at scale, group them by failure type, and look for patterns that contradict the listing or the manufacturer's claims.
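As a rough sketch of what that grouping step can look like in practice (the keywords and failure buckets below are illustrative examples, not the taxonomy used in a real audit, which is larger and category-specific):

```python
from collections import Counter

# Hypothetical keyword -> failure-type taxonomy, for illustration only.
FAILURE_KEYWORDS = {
    "broke": "structural failure",
    "cracked": "structural failure",
    "overheats": "thermal",
    "burning smell": "thermal",
    "stopped working": "dead on arrival / early failure",
    "leaked": "sealing / containment",
}

def bucket_reviews(one_star_reviews):
    """Group one-star review texts into failure-type buckets by keyword match."""
    counts = Counter()
    for text in one_star_reviews:
        lowered = text.lower()
        for keyword, bucket in FAILURE_KEYWORDS.items():
            if keyword in lowered:
                counts[bucket] += 1
    return counts.most_common()

# Toy example reviews (invented for this sketch).
reviews = [
    "It cracked after two days of normal use.",
    "Unit overheats and there is a burning smell.",
    "Stopped working within a week.",
    "Handle broke the first time I lifted it.",
]
print(bucket_reviews(reviews))
```

The output is a ranked list of failure types by frequency, which is exactly the shape of evidence that can contradict a listing's claims.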

8. Returns & Warranty Reality

What does the warranty actually cover? What does the return policy actually allow? The gap between policy and practice is where consumers — and small brands — get hurt.

9. Comparable Products

How does this product compare to two or three direct competitors on materials, specs, failure modes, and price? Context turns a verdict from an opinion into a recommendation.

10. Risk-Ranked Verdict

Findings are sorted into high, medium, and low risk — with specific evidence, recommended actions, and a final yes / no / with-caveats verdict that someone can actually act on.
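A minimal sketch of the risk-ranked structure that verdict rests on (the fields, tiers, and example findings here are illustrative, not the audit's actual report template):

```python
# Sort order for risk tiers: high-risk findings surface first.
RISK_ORDER = {"high": 0, "medium": 1, "low": 2}

# Invented example findings, each paired with evidence-backed action.
findings = [
    {"risk": "low", "finding": "Manual omits a torque spec", "action": "Add spec to product insert"},
    {"risk": "high", "finding": "Charger lacks safety certification", "action": "Certify before launch"},
    {"risk": "medium", "finding": "One-star reviews cluster on hinge failures", "action": "Re-qualify hinge supplier"},
]

ranked = sorted(findings, key=lambda f: RISK_ORDER[f["risk"]])
for f in ranked:
    print(f"[{f['risk'].upper()}] {f['finding']} -> {f['action']}")
```

The point of the structure is that every line pairs a risk tier with a specific action, so the final yes / no / with-caveats call is traceable to evidence.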

What AI Does. What a Human Decides.

The honest version. Reviewers and clients deserve to know which parts are pattern-matching at scale and which parts are senior judgment.

⚙️ AI does the homework

Research & Aggregation

  • Pulling and summarizing one-star reviews across multiple retailers
  • Reading spec sheets, manuals, and warranty fine print
  • Cross-referencing federal recall databases (CPSC, FDA, USDA, NHTSA)
  • Identifying patterns across a manufacturer's product catalog
  • Drafting findings, structuring outlines, catching factual errors
  • Flagging missing information that would change the verdict
🧭 Human writes the verdict

Judgment & Recommendation

  • Deciding what's actually worth your money
  • Ranking findings by real-world risk vs. theoretical risk
  • Translating regulatory exposure into plain-English consequences
  • Calling out when claimed specs would not pass real QA review
  • Writing the final yes / no / with-caveats recommendation
  • Owning the verdict on the byline. No fictional staff names.

The Federal Databases We Run Daily

Recall and complaint data from four U.S. federal agencies, normalized and cross-referenced through pipelines we built and operate ourselves. The same pipelines that power the RecallSentry™ app.

  • CPSC (Consumer Product Safety Commission): toys, electronics, household goods, appliances, child safety equipment
  • FDA (Food & Drug Administration): food, drugs, cosmetics, OTC health products
  • USDA (U.S. Department of Agriculture): meat, poultry, eggs, processed foods regulated by FSIS
  • NHTSA (National Highway Traffic Safety Administration): vehicles, tires, child seats, vehicle equipment, recalls and amendments
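Normalizing four agencies into one pipeline mostly means mapping four different record shapes onto a shared schema. A minimal sketch of that idea, assuming a CPSC-style record layout (the field names and sample record below are assumptions for illustration; each real feed has its own format and needs its own mapper):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RecallRecord:
    """Shared schema every agency's feed is mapped onto (illustrative)."""
    agency: str
    recall_id: str
    product: str
    hazard: str
    recall_date: date

def normalize_cpsc(raw: dict) -> RecallRecord:
    """Map one assumed CPSC-style record into the shared schema."""
    return RecallRecord(
        agency="CPSC",
        recall_id=str(raw["RecallID"]),
        product=raw["Products"][0]["Name"],
        hazard=raw["Hazards"][0]["Name"],
        recall_date=date.fromisoformat(raw["RecallDate"][:10]),
    )

# Invented sample record in the assumed shape.
sample = {
    "RecallID": 9999,
    "RecallDate": "2024-05-01T00:00:00",
    "Products": [{"Name": "Example Toaster"}],
    "Hazards": [{"Name": "Fire hazard"}],
}
rec = normalize_cpsc(sample)
print(rec.agency, rec.product, rec.recall_date)
```

With one mapper per agency, manufacturer and category history can be queried across all four feeds at once, which is what makes cross-referencing practical.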

What This Methodology Does NOT Cover

If we can't see it in the documents we review, we'll say so. Knowing the limits is what makes the verdict honest.

The QA Audit does not include:

  • Physical product testing. We review documentation, specifications, and field data — we do not break units on a test bench. Where physical testing would change a verdict, we say so explicitly.
  • In-person supplier or factory audits. Manufacturer track record is assessed through public regulatory data, not on-site visits.
  • Regulatory filings or submissions. We flag exposure and missing requirements; we do not handle FDA, CPSC, or other agency submissions on a client's behalf.
  • Litigation support or expert witness work. Available separately. Not part of the standard audit.
  • Anything we can't see in the documents reviewed. When information is missing, we list what's missing — we do not fill the gap with assumptions.

A QA Audit is a senior judgment layered onto well-organized data. It is not a substitute for product certification, formal regulatory clearance, or destructive testing where those are required by law or category norms.

The Engineer Behind the Methodology

One person owns the verdict on every audit. No fictional staff, no rotating bylines.

👤
Mark Mayeux
Quality Engineer · QESaaS

Twenty-plus years in product quality engineering across three categories where the cost of getting it wrong ranges from ruined-your-Tuesday to ruined-your-life: medical devices, aerospace, and consumer goods. Same instinct, different stakes.

Earlier roles included supplier quality audits across medical consumables, medical devices, and consumer goods, plus leading teams through years of ISO 9001, ISO 14001, and OHSAS 18001 certification audits while authoring company process documents and work instructions for every department. Mark also built the entire quality department at a medical equipment startup from scratch: hiring, agency documentation, supplier quality requirements, and supplier scorecards. The startup was later acquired by a larger company.

Also the builder behind RecallSentry™, a published iOS and Android app that monitors federal recall feeds across CPSC, FDA, USDA, and NHTSA — built using AI as the labor layer. The same daily-running data pipelines that power that app feed the QA Audit's manufacturer and category data.

Quality & Six Sigma · Regulatory Affairs · Supplier Quality Auditing · ISO 9001 / 14001 · OHSAS 18001 · Medical Devices · Aerospace · Automotive · Consumer Products · Med Equipment Startup → Acquisition · AI-Augmented Engineering

Want This Done on a Product You're Launching?

The Pre-Launch QA Audit applies this exact methodology to your product before it ships. Fixed price. One-week turnaround.