Bug Bounty Programs: How Companies Pay You to Hack Them
Bug bounty programs are structured arrangements where companies invite independent security researchers to find vulnerabilities in their products and services, then pay a reward for valid, responsibly disclosed findings. The company gets security research at scale without hiring a full internal team. The researcher gets paid for work they were going to do anyway.
The model has matured significantly over the past decade. What started as informal "hall of fame" acknowledgements has become a multi-billion dollar industry with professional researchers earning six-figure incomes, enterprise programs with million-dollar payouts, and platforms that handle the logistics between companies and researchers.
How bug bounty programs work
The basic loop is simple: a researcher finds a vulnerability, reports it to the company through the program's defined process, the company verifies it and determines severity, and then pays a bounty based on the agreed payout structure.
In practice there are more steps. A well-run program looks like this:
- Program launch: The company defines scope (what can be tested), rules of engagement (what methods are allowed), and payout tiers (what severity gets what reward). This is published on a bug bounty platform or the company's own security page.
- Research: Researchers explore the defined scope looking for vulnerabilities. This might be web application testing, API fuzzing, mobile app analysis, or source code review if the program is public.
- Report submission: The researcher submits a detailed report including the vulnerability type, steps to reproduce, proof-of-concept (screenshots, videos, or code), and impact assessment.
- Triage: The company's security team (or the platform's triage team) reviews the report, attempts to reproduce the vulnerability, and classifies it by severity.
- Resolution: If valid, the company pays the bounty and works to fix the vulnerability. The researcher is notified when the fix is deployed.
- Disclosure: After a set time period (usually 90 days), the researcher may publish the vulnerability details publicly. Some programs allow disclosure sooner if both parties agree.
The whole process can take anywhere from a few days (for critical vulnerabilities where companies move fast) to several months (for complex issues or slow-moving organizations).
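The report-submission step lends itself to a concrete sketch. The dictionary below models the fields a typical report carries; every value in it is illustrative (the asset, payload, and field names are hypothetical), since each platform has its own submission form.

```python
# Illustrative shape of a bug bounty report. All values are hypothetical;
# real platforms (HackerOne, Bugcrowd, etc.) use their own submission forms.
report = {
    "title": "Stored XSS in profile display name",
    "asset": "app.example.com",  # hypothetical in-scope asset
    "vulnerability_type": "Stored Cross-Site Scripting (CWE-79)",
    "severity_estimate": "High",
    "steps_to_reproduce": [
        "Log in and open Settings -> Profile",
        "Set the display name to a script payload",
        "View the profile from a second account; the payload executes",
    ],
    "impact": "Session hijacking of any user who views the attacker's profile",
    "proof_of_concept": ["screenshot-1.png", "poc-video.mp4"],
}

# A triager's first question is always reproducibility, so the
# steps_to_reproduce list carries most of the report's weight.
print(report["title"])
```

Whatever the form looks like, the steps-to-reproduce section is what triagers act on first: a report they can replay in minutes gets validated far faster than one they have to reverse-engineer.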
The main platforms
HackerOne: The largest bug bounty platform by volume. Hosts programs for major organizations including Shopify, GitHub, Uber, the U.S. Department of Defense, and hundreds more. Has a reputation system (signal, impact score) that affects which private programs invite you. Running a program through HackerOne costs the company money (platform fees on top of bounties), which means the programs there tend to come from organizations that have committed real budget to security research.
Bugcrowd: HackerOne's main competitor. Similar model, different program roster. Some researchers prefer one platform's UI or community over the other. Running accounts on both is common.
Intigriti: European-focused platform. Strong roster of European companies and increasingly global programs. GDPR-compliant, which matters for certain types of research.
Synack: A curated, invitation-only platform that vets researchers before allowing access to programs. The targets are often more sensitive (government agencies, financial institutions), and the payout model is more structured than pure bounty-per-bug, compensating vetted researchers for assessment work as well as individual findings.
YesWeHack: Another European platform with a growing global presence. Popular with French and German companies in particular.
Self-hosted programs: Some large companies run their own programs without a third-party platform. Google's Vulnerability Reward Program (VRP), Microsoft's MSRC bounty programs, and Apple's Security Research program are notable examples. The payouts can be larger (Google's VRP has paid out more than $10 million in a single year), but there is no platform to mediate disputes.
```mermaid
graph TD
    subgraph PREP["Preparation Phase"]
        A1["Choose platform<br/>(HackerOne / Bugcrowd / Intigriti)"]
        A2["Select target program"]
        A3["Read scope document<br/>carefully"]
        A4["Identify in-scope assets:<br/>domains, APIs, apps"]
        A1 --> A2 --> A3 --> A4
    end
    subgraph HUNT["Hunting Phase"]
        B1["Reconnaissance:<br/>subdomain enum, tech stack"]
        B2["Map attack surface:<br/>endpoints, parameters, auth"]
        B3["Test for vulnerabilities:<br/>OWASP Top 10, logic flaws"]
        B4{"Found<br/>vulnerability?"}
        B1 --> B2 --> B3 --> B4
        B4 -->|"No"| B1
    end
    subgraph REPORT["Reporting Phase"]
        C1["Verify reproducibility<br/>in a clean environment"]
        C2["Assess severity<br/>(CVSS scoring)"]
        C3["Write detailed report:<br/>steps, impact, proof"]
        C4["Submit via platform"]
        C1 --> C2 --> C3 --> C4
    end
    subgraph RESOLUTION["Resolution Phase"]
        D1["Triage team reviews"]
        D2{"Valid?"}
        D3["Severity confirmed<br/>or adjusted"]
        D4["Company patches<br/>the vulnerability"]
        D5["Bounty paid"]
        D6["Duplicate or<br/>out of scope"]
        D1 --> D2
        D2 -->|"Yes"| D3 --> D4 --> D5
        D2 -->|"No"| D6
    end
    PREP --> HUNT
    HUNT --> REPORT
    B4 -->|"Yes"| C1
    REPORT --> RESOLUTION
```
End-to-end bug bounty workflow - from program selection through hunting, reporting, and resolution. Each phase has distinct skills and requirements.
Scope: what you can and cannot test
Scope definition is one of the most important parts of a bug bounty program, and reading it carefully before starting research is not optional - it is the difference between a valid report and a legal problem.
A typical scope section defines:
- In-scope assets: Specific domains, IP ranges, mobile apps, or APIs that can be tested. "*.example.com" means all subdomains. "app.example.com only" means just that one.
- Out-of-scope assets: What cannot be tested. This often includes third-party infrastructure, acquisition targets not yet integrated, staging environments, or specific high-risk services.
- Out-of-scope vulnerability types: Common exclusions include self-XSS (you can only attack yourself), rate limiting issues without a demonstrated attack path, missing security headers without a proof of impact, and social engineering of employees.
- Testing restrictions: No DoS, no automated scanning above a certain rate, no accessing customer data, no modifying production data.
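Reading scope can also be done mechanically: wildcard entries behave like shell patterns, and exclusions override inclusions. Here is a minimal Python sketch of that logic; the scope lists are hypothetical, and a real program's published scope page is always the authority.

```python
# Sketch: checking a host against a program's published scope before testing.
# The scope entries below are hypothetical, not from any real program.
from fnmatch import fnmatch

IN_SCOPE = ["*.example.com", "api.example.org"]
OUT_OF_SCOPE = ["staging.example.com", "legacy.example.com"]

def is_in_scope(host: str) -> bool:
    """Exclusions win over wildcard inclusions, as in most program rules."""
    if any(fnmatch(host, pattern) for pattern in OUT_OF_SCOPE):
        return False
    return any(fnmatch(host, pattern) for pattern in IN_SCOPE)

print(is_in_scope("app.example.com"))      # True: matches *.example.com
print(is_in_scope("staging.example.com"))  # False: explicitly excluded
print(is_in_scope("example.com"))          # False: the apex is not a subdomain
```

Note the last case: "*.example.com" covers subdomains but not the apex domain itself, a distinction that trips up newcomers reading scope tables.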
Physical security vulnerabilities and wireless attacks are almost universally out of scope for web-focused bug bounty programs. This matters for anyone using wireless tools like the BLEShark Nano: the fact that a company's WiFi network is vulnerable to deauth attacks, or that its employees might click through a rogue portal, is real security intelligence, but it is typically not something you can test and claim a bug bounty for without explicit, separate authorization from the company's security team. Some companies have physical security programs, but they are rare and usually invitation-only.
```mermaid
graph LR
    subgraph SEVERITY["Vulnerability Severity Tiers"]
        direction TB
        CRIT["Critical (CVSS 9.0-10.0)<br/>$10,000 - $100,000+"]
        HIGH["High (CVSS 7.0-8.9)<br/>$2,000 - $15,000"]
        MED["Medium (CVSS 4.0-6.9)<br/>$500 - $3,000"]
        LOW["Low (CVSS 0.1-3.9)<br/>$100 - $500"]
    end
    subgraph CRIT_EX["Critical Examples"]
        CE1["Remote Code Execution"]
        CE2["SQL Injection (data access)"]
        CE3["Auth bypass (admin)"]
        CE4["SSRF to internal services"]
    end
    subgraph HIGH_EX["High Examples"]
        HE1["Stored XSS (user sessions)"]
        HE2["IDOR (other users' data)"]
        HE3["Privilege escalation"]
    end
    subgraph MED_EX["Medium Examples"]
        ME1["CSRF on sensitive action"]
        ME2["Information disclosure"]
        ME3["Open redirect chains"]
    end
    subgraph LOW_EX["Low Examples"]
        LE1["Self-XSS"]
        LE2["Missing headers"]
        LE3["Verbose error messages"]
    end
    CRIT --> CRIT_EX
    HIGH --> HIGH_EX
    MED --> MED_EX
    LOW --> LOW_EX
    style CRIT fill:#3a1a1a,stroke:#ff4444,color:#ff6666
    style HIGH fill:#3a2a1a,stroke:#ff8844,color:#ffaa66
    style MED fill:#3a3a1a,stroke:#ffff44,color:#ffff66
    style LOW fill:#1a3a1a,stroke:#44ff44,color:#66ff66
```
Bug bounty payout tiers mapped to CVSS severity - critical vulnerabilities like RCE and auth bypass command the highest rewards.
Payout tiers and what they reflect
Payouts are typically structured around severity levels, usually mapped to CVSS (Common Vulnerability Scoring System) scores or a simplified tier:
- Critical (P1): Full account takeover, remote code execution, SQL injection on a production database, authentication bypass. $5,000 to $100,000+ depending on the company size and impact.
- High (P2): Stored XSS that hits user sessions, privilege escalation, IDOR exposing other users' data, SSRF with internal access. $1,000 to $10,000+.
- Medium (P3): Reflected XSS, CSRF on sensitive actions, open redirect that enables phishing, limited information disclosure. $250 to $2,500.
- Low (P4): Verbose error messages, missing security headers, information disclosure of low-sensitivity data. $50 to $500. Some programs do not pay for Low findings at all.
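The tier boundaries map directly onto the CVSS 3.x qualitative severity scale, which is easy to make explicit in code. The sketch below uses the P1-P4 labels from this section; payout amounts are deliberately left out because they are program-specific.

```python
# Sketch: mapping a CVSS 3.x base score to the P1-P4 tiers described above.
# Tier boundaries follow the standard CVSS qualitative severity scale.
def severity_tier(cvss: float) -> str:
    if not 0.0 <= cvss <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if cvss >= 9.0:
        return "Critical (P1)"
    if cvss >= 7.0:
        return "High (P2)"
    if cvss >= 4.0:
        return "Medium (P3)"
    if cvss > 0.0:
        return "Low (P4)"
    return "Informational"

print(severity_tier(9.8))  # Critical (P1) - e.g. unauthenticated RCE
print(severity_tier(6.1))  # Medium (P3) - e.g. a typical reflected XSS
```

In practice triage teams may adjust the submitted score up or down based on demonstrated impact, so treat the CVSS estimate in a report as a starting point for the conversation rather than a guaranteed tier.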
Payout amounts vary enormously by program. Google, Apple, and Microsoft have some of the highest payouts in the industry - critical vulnerabilities in core products can reach $100,000 or more. A small startup might pay $500 for the same finding. The payout reflects both the company's budget and their assessment of the potential business impact of the vulnerability.
The median bug bounty is much lower than the outliers suggest. Most researchers earn $200 to $2,000 per valid finding. The researchers earning six figures are typically very specialized, very fast at finding new vulnerability classes, or have built up reputation that gets them access to private programs with better targets and higher payouts.
Responsible disclosure
Responsible disclosure is the practice of reporting vulnerabilities to the affected organization before making them public, giving the organization time to fix the issue. This is distinct from "full disclosure" (publish immediately, regardless of whether a fix exists) and "coordinated disclosure" (agree on a timeline with the organization before publishing).
The industry standard timeline is 90 days from initial report to public disclosure, regardless of whether the company has fixed the issue. Google Project Zero popularized this timeline. The idea is that it gives companies adequate time to patch while preventing them from indefinitely suppressing information about vulnerabilities affecting users.
Most bug bounty programs operate under coordinated disclosure, meaning the researcher agrees to hold disclosure until the fix is deployed or the timeline expires. In exchange, the company commits to acknowledging the report, communicating status updates, and paying the bounty in a timely manner.
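Assuming the common 90-day window described above (individual programs set their own terms), the disclosure deadline is simple date arithmetic:

```python
# Sketch: computing a coordinated-disclosure deadline from the report date.
# The 90-day default mirrors the common industry timeline; programs vary.
from datetime import date, timedelta

def disclosure_deadline(reported: date, window_days: int = 90) -> date:
    """Earliest date the researcher may publish under the agreed window."""
    return reported + timedelta(days=window_days)

print(disclosure_deadline(date(2024, 1, 15)))  # 2024-04-14
```

A researcher tracking several open reports at once typically logs each report date and deadline this way, since different programs can attach different windows to the same quarter's submissions.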
The legal protections for responsible disclosure are imperfect. In many jurisdictions, the activity of finding the vulnerability - even on an explicitly in-scope target - could theoretically be prosecuted under computer crime laws. This is one reason platforms like HackerOne include a safe harbor provision in their program terms: the company agrees not to pursue legal action against researchers who follow the program rules. Read the safe harbor carefully before submitting.
Wireless and physical: the scope problem
This is worth addressing directly because it is a common point of confusion for people who use wireless security tools.
Physical security and wireless network testing are almost always out of scope for standard bug bounty programs. When a program says "*.example.com is in scope," they mean their web properties - not their office WiFi, not their employees' Bluetooth devices, not their reception desk computer.
There are real vulnerabilities that could be found through wireless and physical testing - a company whose employees all connect to a specific corporate SSID could be tested for rogue AP susceptibility; their devices might respond to BLESpam in ways that reveal internal infrastructure. But testing these without explicit authorization is not covered by the bug bounty safe harbor, even if the vulnerability is real and significant.
Some companies have separate physical penetration testing programs or red team engagements that cover these areas, but these are contracted engagements, not open bug bounty programs. They require direct negotiation with the company's security team and a formal statement of work.
The practical implication: tools like the BLEShark Nano are excellent for authorized penetration testing engagements and personal research, but the "wireless attack" skills developed with such tools apply to contracted pentest work rather than the open bug bounty programs listed on HackerOne or Bugcrowd.
The culture and community
The bug bounty community is active and generally collaborative, at least among researchers who do not compete directly on the same targets. There is a culture of writeups - after a vulnerability is fixed and disclosed, researchers often publish detailed technical analyses of what they found and how. These writeups are one of the best learning resources available, because they show real vulnerabilities in production systems with full technical detail.
Notable community spaces:
- HackerOne's Hacktivity feed: Public disclosures of vulnerabilities reported through the platform. A goldmine of real finding writeups.
- Twitter/X: Many top researchers are active and post discoveries, techniques, and tooling.
- Discord servers: Several community Discord servers organized around specific vulnerability classes, platforms, or regional communities.
- DEF CON and Black Hat: Annual conferences where significant research is presented publicly. The Bug Bounty Village at DEF CON is specifically focused on the bug bounty community.
The competitive dynamics are real - duplicate reports (where two researchers find the same vulnerability) are common and typically only the first valid report gets paid. This creates a race dynamic on popular programs, which is one reason experienced researchers often focus on less-crowded programs or specific vulnerability classes where they have developed specialized knowledge.
How to get started
The practical sequence for someone starting out:
1. Learn the fundamentals first. Web application vulnerabilities (OWASP Top 10), HTTP protocol mechanics, common injection types, authentication patterns. PortSwigger's Web Security Academy (free) is the best structured resource. Do not skip straight to finding bugs - you need to understand why vulnerabilities exist before you can find them reliably.
2. Practice on dedicated platforms. HackTheBox, TryHackMe, and PentesterLab have machines and challenges designed to teach real techniques without legal risk. Get comfortable with the methodology before going after live programs.
3. Start with small, low-competition programs. Large companies get hundreds of researchers competing for the same findings. A smaller company with a bug bounty program and less research coverage is easier to find valid bugs in. Look for programs that were recently launched, have low researcher counts, or cover niche products.
4. Read program writeups obsessively. HackerOne's public disclosures show you exactly what kind of findings are valid and how they are documented. Model your reports on the ones that are well-received.
5. Write up everything you find, even if it does not pay. Duplicate findings, out-of-scope items, and informational findings are all learning opportunities. A blog with detailed security research writeups builds your reputation and eventually gets you invited to private programs.
6. Be patient with triage. Reports often sit in triage for weeks. Follow up politely if there is no response after two weeks. Do not spam the security team - it makes a bad impression and does not speed up the process.
Bug bounty hunting is a skill that compounds over time. The first $500 finding takes much longer than the tenth, because by then you know what patterns to look for, how to write reports that triagers approve quickly, and which programs have the highest signal-to-noise ratio for your skills.
For those interested in the wireless and hardware side of security research specifically: the skills developed with tools like the BLEShark Nano - understanding BLE advertising, WiFi handshake capture, HID injection via Bad-BT - translate directly into specialized areas of security research that not many people have hands-on experience with. Firmware security, embedded systems research, and IoT vulnerability research are areas where the bug bounty market is growing and where the competition is less dense than web application testing.