dare.co.uk — Sunday traffic recompare
Baseline: 2026-05-06 (Wed, dashboard snapshot) · Today: 2026-05-10 (Sun, GraphQL fetch covering UTC 2026-05-09) · Window: 24h each
TL;DR
- Volume up +66% vs baseline: 8,937 → 14,820 in 24h. Two-thirds of the increase is bot pressure — real-content delivery (200s) only up modestly: 4,086 → 4,960 (+21%).
- The improving delta is plottable. 404 share down 13 points (39.0% → 25.7%); 301 share up (9.9% → 16.2%) as redirects keep absorbing legacy bot probes. Wed → Sat is a clean before/after arc.
- Cache HIT held steady in a tight 43–47% band. Saturday 43.3% — slight dip from Friday's 46.6%, well within day-to-day variance.
- Threat volume up 586% (318 → 2,183, 3.6% → 14.7%). Bot Fight Mode firing harder. Worth a watch, not yet alarming.
- Sat vs Fri is small drift — following a similar path with no surprises. The meaningful narrative is Wed → Sat.
Trajectory — four days side by side
| Metric | Wed 05-06 | Thu 05-07 | Fri 05-08 | Sat 05-09 | Wed → Sat Δ |
|---|---|---|---|---|---|
| Total requests / 24h | 8,937 | 11,230 | 14,212 | 14,820 | +65.8% |
| Page views | 2,076 | 2,165 | 2,941 | 2,906 | +40.0% |
| Cached requests | 3,778 | 5,046 | 6,621 | 6,423 | +70.0% |
| Cache HIT % | 42.3% | 44.9% | 46.6% | 43.3% | +1.0 pt |
| Threats | 318 | 1,222 | 1,410 | 2,183 | +586% |
| Threat % | 3.6% | 10.9% | 9.9% | 14.7% | +11.1 pts |
| Real content (200) | 4,086 | 3,175 | 4,776 | 4,960 | +21.4% |
| Uniques (approx) | 1,876 | 1,577 | 1,701 | 1,951 | +4.0% |
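For reproducibility, the derived columns above (cache HIT %, threat %, the Wed → Sat deltas) are plain arithmetic over the cached daily snapshots described in the methodology note. A minimal sketch, assuming hypothetical top-level keys (`requests`, `cachedRequests`, `threats`) in each snapshot; the real cache layout and the baseline filename may differ:

```python
# Sketch only: the key names (requests, cachedRequests, threats) and the two
# filenames are assumptions for illustration, not the actual cache schema.
import json
from pathlib import Path

CACHE = Path.home() / "Downloads" / "dare_analytics_cache"

def load_day(fetch_date: str) -> dict:
    """Load one snapshot; the file is named by fetch date, data covers the prior UTC day."""
    return json.loads((CACHE / f"{fetch_date}.json").read_text())

def share(part: int, whole: int) -> float:
    """Percentage share, one decimal place (cache HIT %, threat %, etc.)."""
    return round(100 * part / whole, 1)

def delta(old: int, new: int) -> float:
    """Percentage change from baseline to latest, one decimal place."""
    return round(100 * (new - old) / old, 1)

wed = load_day("2026-05-07")   # Wednesday 2026-05-06 traffic (illustrative filename)
sat = load_day("2026-05-10")   # Saturday 2026-05-09 traffic

print("total requests Δ:", delta(wed["requests"], sat["requests"]), "%")   # ~ +65.8
print("cache HIT %     :", share(sat["cachedRequests"], sat["requests"]))  # ~ 43.3
print("threat %        :", share(sat["threats"], sat["requests"]))         # ~ 14.7
```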
Status-code mix — evolution across the week
| Code | Wed | Thu | Fri | Sat | Read |
|---|---|---|---|---|---|
| 200 | 4,086 (45.7%) | 3,175 (28.3%) | 4,776 (33.6%) | 4,960 (33.5%) | Real content. Modest absolute growth (+21%); share down because total volume grew faster. |
| 301 | 889 (9.9%) | 2,867 (25.5%) | 1,328 (9.3%) | 2,405 (16.2%) | Redirect batch still absorbing legacy SureCart / WP-taxonomy probes. Healthy. |
| 302 | 0 | 56 (0.5%) | 1,573 (11.1%) | 1,094 (7.4%) | Newer 302s introduced mid-week. Stable now. |
| 403 | 322 (3.6%) | 1,225 (10.9%) | 1,423 (10.0%) | 2,184 (14.7%) | Bot Fight Mode tarpit. Escalating with bot pressure. |
| 404 | 3,486 (39.0%) | 3,429 (30.5%) | 4,868 (34.3%) | 3,812 (25.7%) | Down 13 points since baseline. Real broken-link reduction. |
| 405 | 28 (0.3%) | 321 (2.9%) | 58 (0.4%) | 58 (0.4%) | Method-not-allowed bot probes — settled at low level after Thu spike. |
| 530 | 3 (0.0%) | 0 | 6 (0.0%) | 76 (0.5%) | Origin sub-error — 25× jump Saturday. Worker hiccup, see Watch items. |
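For reference, a status-code mix like this comes out of a single GraphQL call grouped by response status. A hedged sketch against the Cloudflare GraphQL Analytics endpoint; the node and dimension names (`httpRequestsAdaptiveGroups`, `edgeResponseStatus`) are written from memory and worth checking against the live schema, and `CF_ZONE_TAG` / `CF_API_TOKEN` are placeholder environment variables:

```python
# Sketch: 24h status-code mix for Saturday 2026-05-09, grouped by edge response status.
import os
import requests

ZONE = os.environ["CF_ZONE_TAG"]     # placeholder: the dare.co.uk zone tag
TOKEN = os.environ["CF_API_TOKEN"]   # placeholder: an Analytics-read API token

query = f"""
{{
  viewer {{
    zones(filter: {{ zoneTag: "{ZONE}" }}) {{
      httpRequestsAdaptiveGroups(
        limit: 100,
        filter: {{ datetime_geq: "2026-05-09T00:00:00Z",
                   datetime_lt:  "2026-05-10T00:00:00Z" }}
      ) {{
        count
        dimensions {{ edgeResponseStatus }}
      }}
    }}
  }}
}}
"""

resp = requests.post(
    "https://api.cloudflare.com/client/v4/graphql",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"query": query},
    timeout=30,
)
groups = resp.json()["data"]["viewer"]["zones"][0]["httpRequestsAdaptiveGroups"]
total = sum(g["count"] for g in groups)
for g in sorted(groups, key=lambda g: -g["count"]):
    code = g["dimensions"]["edgeResponseStatus"]
    print(f"{code}: {g['count']:>6} ({100 * g['count'] / total:.1f}%)")
```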
Visual (chart not reproduced here)
What this tells us
The recent fixes are doing exactly what they were aimed at, and the trajectory is a clean four-day arc you could put in a deck:
- 404 share down 13 points (39% → 25.7%). The redirect batch is still absorbing the legacy URL probes Google was reading as “broken site”.
- 301s at 16.2% of all traffic, up from 9.9% at baseline. Each is an edge-computed response: fast, free, no origin work, and the right Google signal (“we know where it moved”).
- Real-content delivery is healthy. 4,960 200s on Saturday vs 4,086 on Wednesday (+21%). Page views up similarly. The HSTS + cache-control + sitemap + redirect work didn't break anything visible to real visitors.
- Cache HIT held in a tight band. 43–47% across all four days. No regression from the deploy churn. Saturday's 43.3% is well within the noise floor.
The framing for Friday's recompare (“composition has shifted significantly”) still applies — but as of Sunday the shift has settled into a stable new shape, not a moving target.
Watch items
- Threat volume up 586% (318 → 2,183). Three plausible causes, all benign for a static site:
- Genuine increase in script-kiddie traffic (this week's deploys made the site look “active” again)
- Bot Fight Mode tightening its model after seeing the activity
- Both
- 530 errors jumped 25× on Saturday. Closed incident, not deploy-window noise (see addendum for the data trail). 91% of the day's 530s landed in just two 5-minute buckets: 22:25 UTC (34 errors during a ~130 req/min burst with 363 simultaneous WAF blocks; the Worker was briefly overwhelmed) and 23:20 UTC (35 errors in 61 requests, a 57% error rate; a focused failure, not a volume one). The dare-co-uk Worker was modified at 23:32:54 UTC, minutes after burst #2, almost certainly your fix. Zero 530s in all subsequent buckets through midnight, and zero on Sunday so far. Treat as closed; watch next Friday's recompare for any recurrence.
- 302s landed and stuck: 0 (Wed) → 1,094 (Sat). New 302 redirects from the mid-week edge-rules work. Stable and working as intended; flagged so we don't forget they exist.
Recommendations
- Yes, plot the delta. Wed → Sat is a presentable arc — drop the trajectory table into the case-study folder. With Friday 2026-05-15's recompare it'll be a full-week before/after.
- No action on traffic shape. Following a similar path to Friday — Saturday is small drift, no surprise.
- Check Worker logs for Saturday's 530 spike: done (see addendum). Closed incident; no follow-up needed unless 530s reappear next week.
- Recompare next Friday (2026-05-15) to establish a post-stabilisation baseline.
Methodology note
- Baseline (10,360 / 80.14% / 14.81% / 5.05%) comes from CF dashboard's “Traffic overview” tier, which categorises differently from GraphQL — see the Friday recompare's methodology section for the full caveat.
- All four days in the trajectory table are GraphQL-fetched (`~/Downloads/dare_analytics_cache/*.json`), so they're directly comparable to each other. The only mixed-source comparison is the dashboard-baseline 10,360 vs GraphQL 14,820; those don't quite map 1:1, but the directional move (volume up, real content stable, threats up) is robust either way.
- Snapshot files use the fetch date as the filename; the data inside is for the prior UTC day. So `2026-05-10.json` (fetched today) describes Saturday 2026-05-09's traffic. A small mapping sketch follows this list.
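To keep the fetch-date vs data-window convention straight, here is a minimal mapping sketch; only the cache path comes from the note above, and nothing about the snapshot's internal layout is assumed:

```python
# Sketch: a snapshot named for its fetch date describes the *prior* UTC day.
from datetime import date, datetime, timedelta, timezone
from pathlib import Path

CACHE = Path.home() / "Downloads" / "dare_analytics_cache"

def data_window(snapshot: Path) -> tuple[datetime, datetime]:
    """Return the [start, end) UTC window the snapshot's data covers."""
    fetched = date.fromisoformat(snapshot.stem)        # e.g. 2026-05-10
    covered = fetched - timedelta(days=1)              # data is for 2026-05-09
    start = datetime(covered.year, covered.month, covered.day, tzinfo=timezone.utc)
    return start, start + timedelta(days=1)

for f in sorted(CACHE.glob("*.json")):
    start, end = data_window(f)
    print(f"{f.name} -> {start.isoformat()} to {end.isoformat()}")
```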
Addendum — 530 spike verified (2026-05-10)
The original watch-item hypothesis (“almost certainly deploy-window noise from today's launchd-cron unblock + manual deploy testing”) was wrong on two counts. Documenting the falsification trail because the reasoning matters.
What I checked: five-minute-granularity GraphQL on httpRequestsAdaptiveGroups for the 21:30–01:00 UTC window around the spike, plus the dare-co-uk Worker's modified_on timestamp.
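For the record, that check boils down to two API calls. A hedged sketch of both; the GraphQL field names and the Workers scripts listing endpoint are written from memory and should be verified, and the `CF_*` environment variables are placeholders:

```python
# Sketch: (1) 5-minute buckets around the spike window, split by status code;
# (2) the dare-co-uk Worker's modified_on timestamp.
import os
import requests

TOKEN = os.environ["CF_API_TOKEN"]      # placeholder credential
ZONE = os.environ["CF_ZONE_TAG"]        # placeholder zone tag
ACCOUNT = os.environ["CF_ACCOUNT_ID"]   # placeholder account id
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# (1) 21:30 Sat to 01:00 Sun UTC, grouped into 5-minute buckets per status code.
query = f"""
{{
  viewer {{
    zones(filter: {{ zoneTag: "{ZONE}" }}) {{
      httpRequestsAdaptiveGroups(
        limit: 1000,
        filter: {{ datetime_geq: "2026-05-09T21:30:00Z",
                   datetime_lt:  "2026-05-10T01:00:00Z" }}
      ) {{
        count
        dimensions {{ datetimeFiveMinutes edgeResponseStatus }}
      }}
    }}
  }}
}}
"""
r = requests.post("https://api.cloudflare.com/client/v4/graphql",
                  headers=HEADERS, json={"query": query}, timeout=30)
groups = r.json()["data"]["viewer"]["zones"][0]["httpRequestsAdaptiveGroups"]
for g in sorted(groups, key=lambda g: g["dimensions"]["datetimeFiveMinutes"]):
    if str(g["dimensions"]["edgeResponseStatus"]) == "530":
        print(g["dimensions"]["datetimeFiveMinutes"], g["count"])

# (2) Worker metadata: modified_on for every script, filtered to dare-co-uk.
scripts = requests.get(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT}/workers/scripts",
    headers=HEADERS, timeout=30,
).json()["result"]
for s in scripts:
    if s["id"] == "dare-co-uk":
        print("dare-co-uk modified_on:", s["modified_on"])
```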
What the data showed:
| 5-min bucket UTC | 530 | Total | Notable |
|---|---|---|---|
| 22:25 (23:25 BST) | 34 | 641 | 363 × 403s; a ~130 req/min burst, roughly 12× the day's average rate. |
| 23:20 (00:20 BST Sun) | 35 | 61 | 57% error rate — focused failure, low volume. |
| (rest of day) | 7 | full day | Flat background. |
Where the original hypothesis went wrong:
- Timeline. My launchd-cron unblock + manual deploys happened ~07:30 BST Sunday morning, not late Saturday. The 530 bursts were 7+ hours earlier.
- Cause. The signature isn't deploy-window noise (which would look like a few 530s spread around a single short window with no other anomaly). It's two distinct events: an attack-burst overwhelm at 22:25 (the Worker hit CPU/concurrency limits during a sustained ~130 req/min spike while the WAF was already blocking 363 of the 641 requests), and a focused failure at 23:20 (35 530s in 61 requests is the signature of a broken endpoint, not volume overload).
What probably happened: a botnet hit dare.co.uk hard at 22:20–22:30 UTC, the WAF caught most of it (363 of 641), and the Worker briefly couldn't keep up with the rest (34 × 530s). A second, smaller burst at 23:20 hit a specific Worker path that was breaking. You likely investigated and shipped a fix: dare-co-uk Worker modified_on = 2026-05-09T23:32:54Z, minutes after the second burst. Zero 530s in any bucket after that, and zero today.
Lesson banked: when a watch-item ends with “probably X”, that's a testable hypothesis. Five minutes of higher-resolution data is cheaper than publishing a wrong leading explanation.
Generated 2026-05-10 from 2026-05-10.json (data window: UTC 2026-05-09). Addendum verified against 5-min GraphQL + Worker metadata.