How we score products
Our crowd scores are generated entirely from real social data — no editorial opinion, no advertiser influence. Here's exactly how they work.
Data collection
We collect YouTube review videos and Amazon customer reviews for every product we cover. Videos are matched to products by name; reviews are matched by ASIN. We prioritise long-form content — full video reviews and verified purchase reviews with substantive text — over one-liners.
Sentiment analysis
Each mention is scored by GPT-4o-mini, which reads the full text and returns a sentiment score with reasoning. This catches nuance — sarcasm, conditional praise, age-specific caveats — that simpler rule-based approaches miss. Scores run from –1 (strongly negative) to +1 (strongly positive).
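In practice this means prompting the model for a structured reply and parsing it defensively. The prompt wording, the JSON schema, and the `parse_sentiment_reply` helper below are assumptions for illustration, not our exact production prompt.

```python
import json

# Hypothetical prompt; the production wording is an assumption here.
PROMPT_TEMPLATE = (
    "Score the sentiment of this product mention from -1 (strongly negative) "
    "to +1 (strongly positive). Reply as JSON: "
    '{{"score": <float>, "reasoning": "<one sentence>"}}\n\nMention:\n{text}'
)

def parse_sentiment_reply(reply: str) -> tuple[float, str]:
    """Parse the model's JSON reply, clamping the score to [-1, 1]."""
    data = json.loads(reply)
    score = max(-1.0, min(1.0, float(data["score"])))
    return score, data.get("reasoning", "")
```

Clamping matters because a language model can occasionally return a value outside the requested range.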
Aggregation
Raw sentiment scores (–1 to +1) are normalised to a 0–100 scale. We weight mentions by engagement — a YouTube comment with 500 likes counts for more than one with 0; an Amazon review marked helpful by 10 buyers carries more weight than one with no votes.
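The aggregation above can be sketched as a weighted average. The log-dampened weighting function is an assumption chosen so that one viral comment cannot completely drown out the rest; the normalisation formula follows directly from the stated scales.

```python
import math

def engagement_weight(votes: int) -> float:
    # Assumption: log-dampened weighting, so 500 likes counts for more
    # than 0 likes without dominating the average outright.
    return 1.0 + math.log1p(votes)

def crowd_score(mentions: list[tuple[float, int]]) -> float:
    """mentions: (sentiment in [-1, 1], engagement votes) pairs.

    Returns the engagement-weighted mean sentiment, normalised to 0-100
    via (s + 1) / 2 * 100.
    """
    total_weight = sum(engagement_weight(v) for _, v in mentions)
    weighted = sum(s * engagement_weight(v) for s, v in mentions) / total_weight
    return round((weighted + 1.0) / 2.0 * 100.0, 1)
```

For example, a +0.8 comment with 500 likes paired with a −0.2 comment with no likes lands well above the midpoint, because the engaged mention carries most of the weight.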
Platform breakdown
We report the overall crowd score alongside a per-platform breakdown. If a product scores well with YouTube reviewers but has mixed Amazon reviews, that context matters — and we show it.
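Structurally, the breakdown is just a group-by over the same mentions. A minimal sketch, assuming each mention carries a platform tag and an already-normalised 0–100 score:

```python
from collections import defaultdict

def platform_breakdown(mentions: list[tuple[str, float]]) -> dict[str, float]:
    """mentions: (platform, score on 0-100) pairs -> mean score per platform."""
    buckets: dict[str, list[float]] = defaultdict(list)
    for platform, score in mentions:
        buckets[platform].append(score)
    return {p: round(sum(s) / len(s), 1) for p, s in buckets.items()}
```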
Data freshness
YouTube data refreshes weekly. Amazon reviews are updated periodically as new verified purchases are collected. Sentiment scores are cached between updates. Product pages show when the score was last calculated.
Score thresholds
- 70–100: Strongly recommended by the crowd
- 45–69: Mixed opinions — read the quotes
- 0–44: Mostly negative crowd sentiment
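The bands map straightforwardly to display labels. One sketch, assuming boundaries are inclusive at the bottom of each band (how a score of exactly 69.5 rounds is an implementation detail not specified above):

```python
def crowd_label(score: float) -> str:
    """Map a 0-100 crowd score to its display band."""
    if score >= 70:
        return "Strongly recommended by the crowd"
    if score >= 45:
        return "Mixed opinions"
    return "Mostly negative crowd sentiment"
```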