QR Code Analytics: What to Track, What to Ignore

The first thing every QR analytics dashboard shows is “total scans.” It’s the wrong number to optimize for. A poster in Times Square will get more scans than a sign in a quiet restaurant, but that doesn’t tell you anything about whether either campaign is working.

Real value from QR analytics comes from comparing scans against something — a goal, a baseline, another campaign, or your own previous performance. This post is about which numbers are worth comparing, which ones are noise, and how to set up dynamic QR codes so the data is actually useful.

What you can measure (and what you can’t)

Dynamic QR codes generate one scan event per scan (a sample event is sketched after this list). Each event captures:

  • Timestamp — when the scan happened.
  • Device family — iOS, Android, or other.
  • Browser — Safari, Chrome, etc.
  • Approximate country and (often) city — from IP geolocation; availability depends on your plan tier.
  • The short code that was scanned — useful when you have multiple codes for the same campaign.
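
For concreteness, a single event might deserialize into a record like the one below. The field names are illustrative, not QRSync's exact export schema:

    scan_event = {
        "timestamp": "2026-04-01T08:15:23",  # when the scan happened
        "device":    "iOS",                  # device family: iOS, Android, or other
        "browser":   "Safari",
        "country":   "US",                   # from IP geolocation
        "city":      "Chicago",              # approximate; availability depends on plan tier
        "code":      "spring26-window",      # the short code that was scanned
    }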

What QR analytics can’t tell you:

  • Who scanned — no personal identifiers, and no cross-site cookie tracking.
  • What they did after the scan — that’s the job of your destination page’s analytics.
  • Whether the same person scanned twice — every scan is its own event.

QR analytics, in other words, is about the moment of scan: who’s looking, when, from where, on what device. The post-scan story — did they buy, read, sign up — happens in your website analytics (Google Analytics, Plausible, Fathom, whatever you use) on the destination side.

The right setup is to think of these two layers as complementary, not redundant.
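
One common way to stitch the two layers together is to tag the QR's destination URL so your web analytics can attribute post-scan behavior back to the code. A minimal sketch with hypothetical campaign values (UTM tagging is a web-analytics convention, not a QRSync feature):

    from urllib.parse import urlencode

    destination = "https://example.com/menu"
    params = {
        "utm_source": "qr",                       # traffic arrived via a QR scan
        "utm_medium": "print",
        "utm_campaign": "spring26-window-decal",  # mirrors the QR code's name
    }
    print(f"{destination}?{urlencode(params)}")
    # https://example.com/menu?utm_source=qr&utm_medium=print&utm_campaign=spring26-window-decal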

The four metrics worth tracking

1. Scan velocity, not just total scans

“42 scans this month” tells you nothing. “42 scans this month, up from 28 last month” tells you the campaign is gaining momentum. “42 scans this month, with 38 in the first week” tells you the print materials hit a peak and decayed.

Track scan velocity in weekly buckets (a sketch of the bucketing follows this list):

  • Week-over-week growth — flat or rising means the campaign is healthy. Falling means something’s changing (campaign fatigue, signage damage, weather).
  • Time to first scan — how long after deployment before scans started arriving. Faster is better for time-sensitive campaigns; a slow start may mean the signage is being overlooked.
  • Steady-state weekly rate — what’s the baseline once the initial spike settles? This is your “normal” scan volume.
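
A minimal sketch of that bucketing, assuming you've exported a code's scan timestamps as ISO-8601 strings (QRSync's actual export format may differ):

    from collections import Counter
    from datetime import datetime

    def weekly_counts(timestamps: list[str]) -> dict[str, int]:
        """Bucket ISO-8601 scan timestamps into ISO weeks, e.g. '2026-W14'."""
        weeks = Counter()
        for ts in timestamps:
            year, week, _ = datetime.fromisoformat(ts).isocalendar()
            weeks[f"{year}-W{week:02d}"] += 1
        return dict(sorted(weeks.items()))

    def week_over_week(counts: dict[str, int]) -> list[float]:
        """Growth ratio of each week against the previous one (1.0 = flat)."""
        totals = list(counts.values())
        return [curr / prev for prev, curr in zip(totals, totals[1:]) if prev > 0]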

2. Hour-of-day and day-of-week patterns

These reveal when your audience actually engages. A QR code on a coffee shop window might peak at 8–10am on weekdays. A QR code on a dinner menu peaks at 6–9pm. A QR code on a billboard near a sports stadium spikes during game days.

What to do with this (a profiling sketch follows the list):

  • Confirm the campaign matches the placement. A “lunch special” QR with zero noon scans means the placement is wrong or the offer isn’t compelling.
  • Plan content updates. Update the QR’s destination just before peak scan hours, not in the middle of them.
  • Stagger A/B tests. If you swap destinations at peak time, the noise from changeover will distort results.
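
Here's a sketch of the profiling, under the same exported-timestamps assumption as the velocity example. One caveat: if your export stores timestamps in UTC, convert them to the placement's local timezone first, or the peaks will appear shifted.

    from collections import Counter
    from datetime import datetime

    def time_profile(timestamps: list[str]) -> tuple[Counter, Counter]:
        """Count scans by hour of day and by weekday name.
        Assumes timestamps are already in the placement's local timezone."""
        by_hour, by_day = Counter(), Counter()
        for ts in timestamps:
            dt = datetime.fromisoformat(ts)
            by_hour[dt.hour] += 1
            by_day[dt.strftime("%A")] += 1
        return by_hour, by_day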

3. Device split (iOS vs. Android)

Useful primarily for technical QA, not segmentation:

  • Different from your demographic baseline? If your customer base skews iOS but scans are 60% Android, your iOS users may be finding another path (existing website, app). Investigate.
  • Sudden shift? A jump in Android share might mean an iPhone-specific bug on your destination page that’s causing fall-off.
  • Use it to test destination pages. When you deploy a new menu version, watch both shares. If iOS scans hold steady but Android scans crater, you know where to start debugging.
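
A minimal sketch of the comparison, assuming each exported event carries a device field like the sample record earlier:

    from collections import Counter

    def device_share(events: list[dict]) -> dict[str, float]:
        """Fraction of scans per device family, e.g. {'iOS': 0.6, 'Android': 0.4}."""
        counts = Counter(e["device"] for e in events)
        total = sum(counts.values()) or 1  # avoid dividing by zero on empty windows
        return {device: n / total for device, n in counts.items()}

    def share_shift(baseline: dict, current: dict, threshold: float = 0.10) -> dict:
        """Flag device families whose share moved more than `threshold`
        between a baseline window and the current one."""
        return {d: (baseline.get(d, 0.0), share)
                for d, share in current.items()
                if abs(share - baseline.get(d, 0.0)) > threshold}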

Don’t try to demographic-segment on this; modern phones are too diverse for “iOS user” to mean much.

4. Geographic distribution (Pro tier and up)

Country and (rough) city data matters when you’re running placement-specific campaigns:

  • Validate placement reach. A QR code in a tourist district should show diverse country origins. A neighborhood spot should be mostly local.
  • Identify unexpected sources. Scans from a country where you don’t operate often mean someone shared a photo of the QR online — could be good (organic spread) or bad (spam aggregator).
  • Run regional A/B tests. Place identical campaigns in different cities, compare scan rates against population density and ad spend.
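
Same idea as the device split, one level up; this again assumes a country field in each exported event:

    from collections import Counter

    def country_mix(events: list[dict], home: str = "US") -> tuple[float, Counter]:
        """Share of scans from the home country, plus the full breakdown."""
        countries = Counter(e["country"] for e in events)
        total = sum(countries.values()) or 1
        return countries[home] / total, countries

    # A neighborhood placement should be overwhelmingly home-country; a sudden
    # cluster from somewhere you don't operate usually means the code was
    # shared online, or a scraper found it.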

This is where dynamic QR codes earn their keep over web analytics alone — you’re tracking the physical placement’s performance, not just the web destination’s.

What’s mostly noise

A few metrics that look interesting but rarely change decisions:

Browser family. Safari, Chrome, and Firefox shares mostly mirror device family (Safari share tracks iOS share, since it's the default browser there). Track it if you're debugging a browser-specific rendering issue; otherwise ignore it.

Operating system version. Almost never actionable. The only useful version data is “is anyone scanning from genuinely ancient devices that might struggle with my destination page” — but you’d catch that from device family + a bug report.

Exact city. IP-based city geolocation is noisy in dense areas (often locating a scanner to the next neighborhood over). Use country and region directionally; treat city as approximate.

Average scans per code. Useless when you have a mix of high-traffic and low-traffic placements. Compare codes within their category instead.

How to set up codes so the data is useful

A few habits that pay back massively:

1. Use one dynamic QR code per distinct placement. If you put the “same” campaign on a window decal and a flyer at the register, use two different QR codes; otherwise the scan data is muddled. Both can point to the same destination — that’s fine — but separate codes let you compare which placement performs.

2. Name codes for your future self. A code named “Untitled QR 3” is useless six months later. Use names like “Spring 2026 menu - window decal” or “Summer giveaway - flyer batch 2.” QRSync’s dashboard lets you rename codes at any time.

3. Annotate when you change the destination. If you swap the QR destination mid-campaign and don’t note it, your future self will see a mysterious scan-volume cliff with no explanation. Keep a simple note (in QRSync’s code description field, or in a spreadsheet) of what changed and when.

4. Set a budget for testing. QR placement testing has a sweet spot: large enough to be statistically meaningful (usually 50–100 scans minimum per variant), small enough that you’re not committing to a multi-year deployment. The free tier (50 scans/month per code) is fine for short tests; the Pro tier (10,000/month) covers most full campaigns.
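
Why 50–100? Below that, random scan-count noise swamps real differences. Here's a back-of-the-envelope check, using a rough normal approximation for comparing two event counts over equal exposure (a generic statistical shortcut, not a QRSync feature):

    import math

    def scan_diff_z(scans_a: int, scans_b: int) -> float:
        """Rough z-score for whether two placements' scan counts differ
        beyond random noise, assuming equal deployment time and footfall."""
        return (scans_a - scans_b) / math.sqrt(scans_a + scans_b)

    print(round(scan_diff_z(72, 51), 2))  # 1.89: below ~2, could still be noise

At around 50 scans in one variant, the other needs roughly 40 percent more before the gap clears z ≈ 2, which is why tiny tests mislead.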

A simple analytics review cadence

For most small businesses running 3–10 QR codes, a 15-minute review every two weeks is plenty (the mechanical checks are scripted after this list):

  1. Total scans per code, last 14 days vs. previous 14 days. Anything down >30% gets a quick investigation.
  2. Day-of-week pattern still matches expectations. A weekday peak that moves to weekends might indicate an audience shift.
  3. Any zero-scan codes? Either the placement is broken (sign fell, code damaged) or the campaign isn’t reaching anyone — both are worth knowing.
  4. Any unexpected geographic spikes? Often a sign of social sharing or scrapers; worth understanding either way.
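
The first and third checks are mechanical enough to script. A sketch, assuming you can pull each code's totals for the last two 14-day windows (the numbers below are made up):

    def review(name: str, last_14: int, prev_14: int) -> str:
        """Biweekly check for one code: flag zero-scan codes and >30% drops."""
        if last_14 == 0:
            return f"{name}: ZERO scans - check whether the placement is intact"
        if prev_14 and (prev_14 - last_14) / prev_14 > 0.30:
            return f"{name}: down {100 * (prev_14 - last_14) / prev_14:.0f}% - investigate"
        return f"{name}: OK ({last_14} scans vs. {prev_14} previously)"

    for name, now, before in [("window decal", 41, 63), ("register flyer", 0, 12)]:
        print(review(name, now, before))
    # window decal: down 35% - investigate
    # register flyer: ZERO scans - check whether the placement is intact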

If a code is consistently underperforming after two review cycles, either kill it or move the placement. Don’t let underperforming codes drag down your sense of what’s working.

The honest summary

QR analytics is best at one thing: telling you whether a physical placement is generating scan engagement, and how that’s trending. It’s worst at telling you who scanned, what they did next, or whether they bought anything. Those answers live in your website analytics.

When you treat QR analytics as a placement performance signal — answering “is this sign / poster / flyer / tent card working?” — it’s incredibly valuable. When you treat it as a complete picture of campaign performance, you’ll be disappointed.

Start tracking your QR codes — sign up free, create your first dynamic code, and watch what comes in. The first week of real scan data usually tells you more than any guide ever could.