CodePatrol
02 / EU AI Act

Catch AI Act issues before they reach production.

A code review bot that knows the regulation. Annex III high-risk classification, Article 11 technical documentation prompts, prohibited practice detection — flagged on every PR, in plain English.

No spam. One email when beta opens.

github · pull request #1204
PR #1204 · feat: candidate ranking model
  def score_candidate(cv):
+     return model.predict(cv.features)
▸ codepatrol · ai act check
This endpoint scores candidates for employment.
Annex III §4(a): employment-related AI is
high-risk. Article 11 documentation required:
· training data sources
· accuracy & fairness metrics
· human oversight design
Reply /open-doc to scaffold.
Where teams get stuck

The regulation lives in PDFs. The code lives in pull requests.

Legal reads, doesn't ship

Your compliance team understands the AI Act. Your engineers ship the features. The translation layer between them is currently a Confluence page nobody reads.

Article 11 sneaks up late

Technical documentation requirements aren't a launch-day task. They accumulate per feature, per PR. By the time auditors ask, retrofitting costs 10× more.

High-risk creeps in

A small classifier added to a hiring flow can flip your entire product into Annex III scope. Engineers can't be expected to spot this. Reviewers won't.

Generic AI review misses it

CodeRabbit and Greptile look for bugs. They don't know what Annex III is, and they're not going to learn — it's not their market.

What CodePatrol checks

Every PR, against the regulation.

Annex III

High-risk classification

Detects code paths touching employment, credit, education, biometrics, critical infrastructure. Flags before merge with the relevant Annex section.
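In spirit, a check like this can be sketched in a few lines. This is a naive illustration only, assuming a simple keyword-to-section mapping; the domain keywords below are assumptions for the sketch, not CodePatrol's actual analysis.

```python
# Naive illustration: map identifiers in a PR to Annex III points via
# keyword lists. Keyword lists are assumptions, not CodePatrol's rules.
ANNEX_III = {
    "employment": ("Annex III, point 4", ["candidate", "hiring", "cv", "recruit"]),
    "credit": ("Annex III, point 5(b)", ["credit_score", "loan", "creditworthiness"]),
    "education": ("Annex III, point 3", ["exam_grading", "admission", "proctor"]),
    "biometrics": ("Annex III, point 1", ["face_match", "biometric", "gait"]),
}

def flag_high_risk(identifiers: list[str]) -> list[tuple[str, str]]:
    """Return (domain, Annex III point) pairs implicated by PR identifiers."""
    hits = set()
    for domain, (point, keywords) in ANNEX_III.items():
        for ident in identifiers:
            if any(kw in ident.lower() for kw in keywords):
                hits.add((domain, point))
    return sorted(hits)

print(flag_high_risk(["score_candidate", "model.predict"]))
# → [('employment', 'Annex III, point 4')]
```

The real problem is much harder than substring matching, which is exactly why it belongs in an automated reviewer rather than a checklist.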

Article 5

Prohibited practices

Social scoring patterns, real-time biometric ID, manipulative dark patterns, predictive policing signals. Hard-stop comments — these are unlawful, not risky.

Article 11

Technical documentation

When a PR introduces a regulated system, CodePatrol scaffolds the required technical file: training data lineage, performance metrics, oversight design.
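The /open-doc scaffold in the mock comment above might look something like this sketch. The section headings are assumptions drawn from the bullets in that comment, not the regulation's exact wording.

```python
# Illustrative sketch of a technical-file scaffold. Section names follow
# the mock comment above; they are assumptions, not the exact structure
# mandated by the regulation.
SECTIONS = [
    "General description of the AI system",
    "Training data sources and lineage",
    "Accuracy and fairness metrics",
    "Human oversight design",
]

def scaffold_technical_file(system_name: str) -> str:
    """Emit a Markdown skeleton for the system's technical documentation."""
    lines = [f"# Technical documentation: {system_name}", ""]
    for section in SECTIONS:
        lines += [f"## {section}", "", "_TODO: complete before deployment._", ""]
    return "\n".join(lines)

doc = scaffold_technical_file("candidate ranking model")
```

The point of scaffolding at PR time is that each section gets filled in while the engineer still remembers the context, instead of reconstructed months later.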

Article 50

Transparency obligations

Flags missing user-facing disclosures for AI-generated content, chatbots, emotion recognition. Generates the disclosure copy where appropriate.

Honest disclaimer

CodePatrol is not a lawyer.

We surface code patterns the AI Act applies to. We don't replace your DPO or counsel. Think of us as the layer between your engineers' next commit and your compliance team's next meeting — one that catches what would otherwise reach them too late.

Closed beta · Summer 2026

Get ahead of the August 2026 enforcement deadline.

No spam. One email when beta opens.