This whitepaper covers four detection domains: bot detection through timing regularity, account takeover detection via behavioral drift, carding identification through form timing, and credential stuffing defense using login-page behavior patterns.
Traditional fraud detection relies on rules (IP blocklists, rate limiting, CAPTCHA) or device fingerprinting. These approaches are easily circumvented by sophisticated attackers using residential proxies, headless browsers with fingerprint spoofing, and CAPTCHA-solving services. Behavioral biometrics offers a fundamentally different approach: instead of identifying the device, identify the human (or non-human) operating it. This whitepaper details ClickStream's four fraud detection domains — bot detection, account takeover, carding, and credential stuffing — and the specific behavioral signals that expose each attack pattern. Every signal is computed at the edge in real time as part of ClickStream's 26-model behavioral scoring pipeline.
In the Signals tab of your ClickStream dashboard at einstein.clickstream.com, every visitor receives a real-time Bot/Fraud Detection Score. Suspicious visitors are flagged automatically — no rules to configure, no thresholds to set. The platform detects bots, credential stuffing, carding attacks, and account takeover attempts using behavioral biometrics that attackers can't easily spoof. This whitepaper explains the detection engine behind those scores.
Humans interact with websites in fundamentally different ways from automated scripts. The differences are measurable, consistent, and extremely difficult to fake.
The core insight of behavioral biometrics is that behavior is harder to fake than identity. An attacker can spoof a device fingerprint, rotate IP addresses, and solve CAPTCHAs. But faithfully reproducing the micro-behavioral patterns of a specific human in real time is computationally intractable.
ClickStream's Bot Probability model (Model #26 in the behavioral scoring pipeline) evaluates five primary bot indicators:
The strongest bot signal is timing regularity. Humans have variable inter-action delays that follow a log-normal distribution. Bots operate on fixed intervals or use simple random delays that produce a uniform distribution.
| Signal | Human Pattern | Bot Pattern | Detection Method |
|---|---|---|---|
| Inter-click timing | Log-normal: 200ms–8s with long tail | Constant (e.g., 500ms) or uniform random | Coefficient of variation < 0.3 = bot signal |
| Inter-page timing | Highly variable: 5s–300s | Constant or narrow range | Standard deviation < 2s = bot signal |
| Scroll speed | Variable with pauses | Constant velocity | Scroll velocity variance < 0.1 = bot signal |
| Keystroke timing | Variable 50ms–400ms inter-key | Constant or zero (paste) | Keystroke variance < 10ms = bot signal |
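The coefficient-of-variation test in the first table row can be sketched in a few lines. This is an illustrative implementation of the stated 0.3 threshold, not ClickStream's production model; the function name and interface are assumptions:

```python
import statistics

def timing_regularity_flag(delays_ms, cv_threshold=0.3):
    """Flag a session as bot-like when inter-click delays are too regular.

    Sketch only: the 0.3 coefficient-of-variation cutoff comes from the
    table above; everything else here is illustrative.
    """
    if len(delays_ms) < 2:
        return False  # too little data to judge
    mean = statistics.mean(delays_ms)
    if mean == 0:
        return True  # zero-delay actions are never human
    cv = statistics.stdev(delays_ms) / mean  # coefficient of variation
    return cv < cv_threshold

# A bot clicking every ~500ms is far more regular than a human browsing:
bot_delays = [500, 502, 498, 501, 499]
human_delays = [230, 1800, 640, 4100, 900]
```

Human delays produce a coefficient of variation near or above 1.0, well clear of the 0.3 cutoff, so the threshold leaves generous margin for slow or deliberate users.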
Headless browsers (Puppeteer, Playwright, Selenium) render pages but often skip generating mouse, scroll, and keyboard events. The absence of these events is a strong bot indicator.
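A minimal sketch of that absence check, scanning the event types collected from a session (event names follow the standard DOM event model; the function itself is illustrative, not ClickStream's actual detector):

```python
def headless_signal(event_types):
    """Return True when a session contains no human input events at all.

    `event_types` is assumed to be a list of DOM event-type strings
    recorded client-side during the session.
    """
    human_input = {"mousemove", "mousedown", "wheel", "scroll",
                   "keydown", "keyup", "touchstart"}
    return not any(e in human_input for e in event_types)
```

Note that a synthetic `click` without any accompanying `mousemove` or `mousedown` still trips this check, which is exactly the pattern scripted browsers tend to produce.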
Bots and scrapers exhibit distinct navigation patterns compared to humans:
| Pattern | Human | Bot |
|---|---|---|
| Page visit order | Selective, interest-driven | Systematic (all pages, alphabetical, or sitemap order) |
| Session depth | 2–8 pages typical | Hundreds or thousands |
| Time per page | Variable, content-dependent | Constant or near-zero |
| Referrer pattern | Organic entry points | Direct to deep pages |
| Resource loading | All resources (CSS, images, JS) | Often skip non-essential resources |
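The navigation rows above can be combined into a simple indicator count. The thresholds (100+ pages, alphabetical order, sub-second spread in per-page time) are assumptions chosen to match the table, not ClickStream's production heuristics:

```python
def navigation_bot_score(pages, per_page_seconds):
    """Count navigation-pattern bot indicators (0-3) for one session.

    Hypothetical heuristic for illustration only.
    """
    score = 0
    if len(pages) > 100:            # session depth in the hundreds
        score += 1
    if pages == sorted(pages):      # systematic (alphabetical/sitemap) order
        score += 1
    if per_page_seconds and max(per_page_seconds) - min(per_page_seconds) < 1.0:
        score += 1                  # near-constant time per page
    return score
```

A scraper that walks 150 zero-padded URLs in order at a fixed 0.2s per page scores 3; a human visiting a handful of pages in interest-driven order with variable dwell times scores 0.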
Account takeover (ATO) occurs when an attacker gains access to a legitimate user's account. Traditional detection relies on IP geolocation and device fingerprinting, both easily spoofed. Behavioral biometrics detects ATO through behavioral drift — the deviation between the current session's behavioral pattern and the account holder's established baseline.
ClickStream builds a behavioral baseline for each identity cluster over time. Each post-login session is scored against that baseline; the table below lists the highest-weight ATO signals:
| Signal | Why It Matters | Weight |
|---|---|---|
| Immediate navigation to account settings | Attackers change password/email first to lock out owner | High |
| Payment method change within first session | Attackers add their payment or extract stored payment info | High |
| Shipping address change + immediate purchase | Classic ATO monetization pattern | Very High |
| Different device type than baseline | Attacker rarely uses same device as victim | Medium |
| Mouse dynamics mismatch | Different person = different motor patterns | High |
| Typing cadence mismatch | Typing is as unique as handwriting | High |
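One way to fold these flags into a single ATO risk number is a weighted sum. The numeric weights below are invented for illustration, loosely following the High/Medium/Very High labels in the table; they are not ClickStream's actual weighting:

```python
def ato_risk(flags, weights=None):
    """Sum the weights of the triggered ATO signals.

    `flags` is a list of triggered signal names; weights are
    hypothetical stand-ins for the table's qualitative labels.
    """
    default = {
        "settings_first": 3,           # High
        "payment_change": 3,           # High
        "address_change_purchase": 4,  # Very High
        "device_mismatch": 2,          # Medium
        "mouse_mismatch": 3,           # High
        "typing_mismatch": 3,          # High
    }
    weights = weights or default
    return sum(weights[name] for name in flags if name in weights)
```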
Carding is the process of testing stolen credit card numbers on e-commerce sites. Carders exhibit distinctive behavioral patterns that differ sharply from legitimate customers:
Legitimate customers type their payment information from memory (slowly, with corrections) or auto-fill from their browser. Carders paste card numbers from a list or type them with copy-paste patterns:
| Metric | Legitimate Customer | Carder |
|---|---|---|
| Card number entry time | 4–15 seconds (typing from memory) | < 1 second (paste) or 2–3s (practiced) |
| CVV entry time | 2–6 seconds (find card, flip it over) | < 0.5 seconds (from same list as card number) |
| Expiry date entry | 1–3 seconds | < 0.5 seconds |
| Total checkout form time | 30–120 seconds | < 10 seconds |
| Name entry | 2–5 seconds (auto-fill or known) | < 1 second (paste) |
| Address entry | 10–30 seconds | < 3 seconds (paste) |
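The form-timing thresholds above translate directly into a paste-detection check. The cutoffs are taken from the table; the function shape and field selection are illustrative:

```python
def carding_flags(card_ms, cvv_ms, total_form_ms):
    """Count checkout fields whose entry time falls under the carder
    thresholds from the table above (0-3)."""
    flags = 0
    if card_ms < 1_000:        # card number landed in under 1s: paste
        flags += 1
    if cvv_ms < 500:           # CVV in under 0.5s: read from the same list
        flags += 1
    if total_form_ms < 10_000: # entire checkout form in under 10s
        flags += 1
    return flags
```

A legitimate customer typing from memory (8s card, 3s CVV, 60s total) scores 0; a carder pasting from a list (0.3s, 0.2s, 6s total) scores 3.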
The total time from product page to checkout completion is a powerful signal. Carders do not browse; they go directly to the cheapest product (often a gift card or digital item) and proceed to checkout as fast as possible.
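A sketch of that velocity check. The 15-second cutoff and two-page browsing floor are assumptions for illustration; the real model would weigh this alongside the form-timing signals:

```python
def checkout_velocity_flag(ms_product_to_checkout, pages_browsed,
                           cutoff_ms=15_000):
    """Flag sessions that sprint from product page to completed checkout
    with essentially no browsing. Thresholds are illustrative."""
    return ms_product_to_checkout < cutoff_ms and pages_browsed <= 2
```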
Credential stuffing uses breached username/password pairs to attempt logins on other sites, exploiting password reuse. The attack relies on automated tools, but those tools must still interact with login forms, and that interaction is where behavioral detection catches them.
| Signal | Legitimate User | Credential Stuffing |
|---|---|---|
| Login page dwell time | 5–30 seconds | < 2 seconds or exactly constant |
| Field focus sequence | Tab between fields, possible corrections | Direct field fills, no navigation |
| Password field timing | 2–10 seconds (recall/typing) | < 0.5 seconds (paste) |
| Mouse to submit button | Curved approach with deceleration | Instant click or no mouse movement |
| Failed login response | Re-read error, try again slowly | Instant retry with different credentials |
| Sessions per IP | 1–3 per day | Hundreds per hour |
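The per-attempt rows of this table can be checked with a small indicator counter. Thresholds come from the table; the parameter names and structure are illustrative:

```python
def stuffing_indicators(dwell_s, password_entry_s, mouse_path_points,
                        retry_gap_s=None):
    """Count credential-stuffing indicators (0-4) for one login attempt."""
    hits = 0
    if dwell_s < 2:                # near-instant form submission
        hits += 1
    if password_entry_s < 0.5:     # pasted password
        hits += 1
    if mouse_path_points == 0:     # submit fired with no mouse movement
        hits += 1
    if retry_gap_s is not None and retry_gap_s < 1:
        hits += 1                  # instant retry with new credentials
    return hits
```

The per-IP volume row is better handled by a separate sliding-window counter, since it spans many attempts rather than one.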
If a credential stuffing attack succeeds (the credentials were valid), the attacker's post-login behavior differs from the legitimate account holder. ClickStream's account takeover detection layer then activates.
All four fraud detection domains feed into a unified fraud score that is stored alongside the other 15 behavioral scores:
| Score Range | Classification | Action |
|---|---|---|
| 0–20 | Legitimate | No action |
| 21–40 | Low risk | Enhanced monitoring |
| 41–60 | Moderate risk | Step-up authentication (CAPTCHA, email verification) |
| 61–80 | High risk | Block transaction, require manual review |
| 81–100 | Very high risk (likely fraud) | Block and alert, rate limit IP |
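The tiering above is a straightforward threshold map. A minimal sketch, with action names invented for illustration (ClickStream's actual webhook payloads and action identifiers may differ):

```python
def action_for_score(score):
    """Map a unified fraud score (0-100) to the action tier in the
    table above. Action names are hypothetical."""
    if score <= 20:
        return "no_action"
    if score <= 40:
        return "enhanced_monitoring"
    if score <= 60:
        return "step_up_auth"          # CAPTCHA, email verification
    if score <= 80:
        return "block_and_review"
    return "block_alert_rate_limit"
```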
ClickStream can trigger automated responses based on fraud scores via webhooks or native integrations.
The most critical challenge in fraud detection is minimizing false positives — blocking legitimate users. ClickStream mitigates this through several mechanisms.
Behavioral biometrics represents the next frontier in fraud detection. By analyzing the micro-patterns of human interaction — mouse dynamics, typing cadence, scroll behavior, navigation rhythm, and form timing — ClickStream detects bots, account takeover, carding, and credential stuffing attacks that bypass traditional device fingerprinting, IP-based rules, and CAPTCHA challenges.
The key advantage is that behavioral signals are computed passively, without user friction. There is no CAPTCHA to solve, no extra authentication step, and no visible detection mechanism. The scoring happens at the edge in under 3ms, inline with every other behavioral model in ClickStream's pipeline. Legitimate users never know they are being evaluated. Only the fraudulent actors experience intervention.
As automated attack tools become more sophisticated — using residential proxies, fingerprint spoofing, and AI-generated behavioral mimicry — behavioral biometrics will become increasingly essential. The fundamental asymmetry remains: faking identity is easy, but faking human behavior at the micro-level is computationally intractable.
Behavioral biometrics detects fraudulent clicks in real time — so every ad dollar reaches a real potential customer. Protect your ROAS at the edge.
GET EARLY ACCESS