
Fintech Regulations Decoded: An Engineer's Guide to GDPR, AML, PCI-DSS, MiCA and More
What this whole alphabet soup actually means for your architecture.
Intro: Why These Keep Showing Up in Fintech
You wanted to build the future of money. Instead, you’re reading about GDPR deletion requirements at 2 AM wondering where it all went wrong. I’ve seen teams build slick platforms only to discover their beautiful architecture can’t answer questions regulators expect answered by end of week.
It’s tempting to dismiss regulations as pointless red tape, but most of them exist because something went badly wrong - data breaches, money laundering scandals, people losing their savings. Fintech touches three things governments care deeply about: personal data, the movement of money, and preventing crime. Understanding this helps you predict what’s coming and why.
This guide assumes you’re an engineer building compliant systems, not a compliance officer writing policies. I’m focusing on what each regulation requires from your architecture and what implementation patterns actually work.
GDPR: Personal Data Isn’t Free
Scope and Key Roles
GDPR is the EU’s privacy law that went into effect in 2018. If you process personal data of people in the EU, you’re in scope. Doesn’t matter where your company is or where your servers are. One EU customer = GDPR compliance.
The UK has its own UK GDPR after Brexit, which is essentially a copy-paste of EU GDPR that became law before the split. For now, they’re functionally identical. Treat them as the same requirements unless you’re dealing with cross-border data transfers between UK and EU (more on that later).
Two roles matter:
- Controller: You decide what data to collect and why (most fintech products)
- Processor: You handle data for someone else (payment processors, cloud providers)
Controllers have more obligations and liability. If you’re building a fintech product, you’re almost certainly a controller.
The Schrems II problem: The 2020 EU Court of Justice decision invalidated the EU-US Privacy Shield framework. If you’re a US company with EU customers, you need Standard Contractual Clauses (SCCs) plus additional safeguards (the 2023 EU-US Data Privacy Framework restored an adequacy route for certified companies, but SCCs remain the common fallback). This often means encryption where EU entities control the keys, making it technically difficult for US law enforcement to access data even if they demand it.
The Three Principles That Matter
Three principles dominate your engineering work:
Purpose limitation - Use data only for the specific purpose you collected it for. Email for login? Can’t suddenly use it for marketing without new consent. This gets messy with analytics, fraud detection, and debugging - you need documented legitimate reasons for each use.
Data minimization - Don’t collect data you don’t need right now. “What if we need it later?” isn’t a valid reason. Collect it when you have a concrete need, not before.
Retention limitation - Can’t keep personal data forever. You need documented retention periods and actual deletion when they expire. Not soft-delete. Real deletion. From everywhere, including backups.
The rights that create engineering work:
- Right to access (Art. 15) - Users can request all data you have about them in a comprehensible format
- Right to rectification (Art. 16) - Users can correct inaccurate data
- Right to erasure (Art. 17) - The “right to be forgotten” - complete removal of all personal data you’ve stored about them
- Right to data portability (Art. 20) - Users get their data in a structured, machine-readable format (e.g. JSON) to take to a competitor
- Right to restrict processing (Art. 18) - Users can limit how you use their data without full deletion
The Four Legal Bases That Matter
To process personal data, you need a lawful basis. GDPR recognizes six; in fintech, these four do the work:
Consent: User explicitly agreed (checked a box)
- Required for: marketing emails, optional features, cookies beyond strictly necessary ones
- Must be: freely given, specific, informed, unambiguous
- Pre-checked boxes don’t count
- Can’t be a condition of service unless genuinely necessary
- Must be as easy to withdraw as it was to give
Legitimate interest: You have a genuine business reason
- Examples: fraud prevention, system security, improving service for existing customers, direct marketing to existing customers (careful with this one)
- Doesn’t require explicit consent, but you need to document the legitimate interest and show it doesn’t override user rights
- Users can object (Article 21) - you must stop unless you demonstrate compelling grounds. Exception: direct marketing, where the right to object is absolute
Contract: Processing necessary to fulfill your contract with the user
- Examples: processing payments they initiated, providing account access, delivering services they purchased
- This is your main legal basis for core fintech operations
Legal obligation: Required by law
- Examples: AML/KYC compliance, tax reporting, regulatory audits
- Overrides deletion requests
For fintech, contract and legal obligation cover most operational uses. Legitimate interest works for fraud detection and security. Consent is mainly for marketing and optional features.
Building for Privacy
Understanding Personal Data Types
Personal data under GDPR is broad - it’s any information that can identify someone directly or indirectly. The obvious stuff (name, email, phone, address) but also device IDs, IP addresses, cookie identifiers, and even internal user IDs if they link back to a person.
Then there’s special category data - sensitive information with stricter rules under Article 9: racial/ethnic origin, political opinions, religious beliefs, health data, biometrics, sexual orientation. You generally can’t process this without explicit consent or a specific legal exemption. In fintech, you’ll hit this with biometric KYC (facial recognition for identity verification) and potentially health data if you’re doing insurance-adjacent products.
The treatment differs significantly. Regular personal data needs a lawful basis (consent, contract, legitimate interest, legal obligation). Special category data needs that plus an Article 9 exemption - and “legitimate interest” doesn’t cut it here. If you’re doing biometric verification, you need explicit consent or to argue it’s necessary for authentication (which some regulators accept, others don’t).
The Deletion Orchestration Problem
At some point, a customer will email you invoking their “right to erasure under Article 17 GDPR.” Your company has one month to comply (extendable by two months for complex requests). What does that actually mean?
First, the good news: the right to erasure isn’t absolute. You don’t have to delete data when:
- You need it for legal compliance (AML record retention, tax records, legal defense)
- You need it to establish, exercise, or defend legal claims
- There’s an overriding public interest
However, if someone signed up three years ago, never transacted, and now wants out - you can’t credibly claim you need their data for AML compliance or legal defense. They go. Same for customers who haven’t been active in years and are well past any legal retention period. The exemptions are real, but they’re not a blanket excuse to keep everything forever.
This creates a nasty conflict in fintech: GDPR says delete, but other regulations say keep data for 5-7 years. The answer: figure out what’s actually legally required to be kept. Delete the rest.
Deletion across distributed systems means:
- Identify all data stores with user data (relational DBs, document stores, search indices, data warehouses, logs)
- Handle cascading deletions correctly (delete user → what about their transactions? reviews? support tickets?)
- Deal with backups (you can keep backups for disaster recovery, but if you restore one, you must re-apply deletion requests)
- Handle derived data
- Update aggregated metrics where the user might be re-identifiable
If you’re wondering why deleting a single user requires orchestrating seventeen microservices and a prayer - welcome to GDPR in a distributed system.
Some practical tips:
Figure out the bare minimum - You often don’t need to delete entire rows. Sanitizing or hashing PII (name, email, phone) while keeping the row with its ID intact preserves referential integrity across tables. A user record with name: "[DELETED]" and email: "[DELETED]" is still useful for foreign key relationships.
Document the deletion saga - Map out exactly what needs to be removed or sanitized in which databases, caches, search indices, and third-party systems. This becomes your deletion runbook.
Orchestrate with idempotency - Deletion jobs will fail partway through. Build them so they can be safely retried without causing errors or duplicate deletions. Track state: “pending,” “in progress,” “completed,” “failed.”
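The deletion saga can be sketched as a small state machine. This is a minimal illustration, not a production orchestrator - the store names (`postgres_pii`, `search_index`, etc.) and handler callables are hypothetical stand-ins for your real systems:

```python
from enum import Enum

class StepState(Enum):
    PENDING = "pending"
    COMPLETED = "completed"
    FAILED = "failed"

# Hypothetical stores; a real saga enumerates every system holding user data.
DELETION_STEPS = ["postgres_pii", "search_index", "cache", "crm_export"]

class DeletionSaga:
    """Tracks per-store deletion state so the job can be retried safely."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.state = {step: StepState.PENDING for step in DELETION_STEPS}

    def run(self, handlers: dict) -> bool:
        """Run every non-completed step. Completed steps are skipped on
        retry, which is what makes the saga idempotent."""
        for step in DELETION_STEPS:
            if self.state[step] is StepState.COMPLETED:
                continue  # idempotency: never re-delete a finished step
            try:
                handlers[step](self.user_id)
                self.state[step] = StepState.COMPLETED
            except Exception:
                # Record the failure but keep going; the next run retries it.
                self.state[step] = StepState.FAILED
        return all(s is StepState.COMPLETED for s in self.state.values())
```

In practice you would persist `self.state` (so a crashed worker can resume) and alert when a step stays failed past some deadline, since the one-month clock is still ticking.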
Logging Strategy
The get-out-of-jail-free card: never log PII in the first place.
Structure your logs to avoid personal data:
- Log user IDs, not names or emails
- Log transaction IDs, not account numbers
- Log event types, not message contents
- Log IP addresses only to the extent needed for security
Then run automated sanitization on logs before they hit your log aggregation system. Libraries, log processors, or third-party services can scan for known PII patterns (emails, phone numbers, card numbers) and redact them. Ideally, set up alerting when PII slips through - you want to know when someone accidentally logs a full customer record so you can fix the code, not discover it during an audit.
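A sanitization pass can be as simple as regex substitution with a leak flag for alerting. This is a toy sketch - the patterns below are illustrative and far narrower than what a production scrubber (or a dedicated library) would use:

```python
import re

# Illustrative patterns only; real scrubbers use broader, well-tested sets.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,19}\b"), "[CARD]"),   # PAN-like digit runs
    (re.compile(r"\+?\d{2}[ -]?\d{3}[ -]?\d{3}[ -]?\d{3,4}"), "[PHONE]"),
]

def scrub(line: str) -> tuple[str, bool]:
    """Redact known PII patterns. The boolean flag tells the caller that
    something leaked, so you can alert and fix the offending log call."""
    leaked = False
    for pattern, replacement in PII_PATTERNS:
        line, hits = pattern.subn(replacement, line)
        leaked = leaked or hits > 0
    return line, leaked
```

Run this as a log-pipeline processor before aggregation, and wire the `leaked` flag into your alerting so PII in logs is treated as a bug, not background noise.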
An example two-tier approach:
- Short-term detailed logs (7-30 days): Kept locally on servers, may contain some PII for emergency debugging, automatically purged
- Long-term structured logs: Sanitized, no PII, kept for compliance and long-term analysis
If you must log PII temporarily, encrypt it with a key you rotate and eventually delete - once the key is gone, the data is effectively unrecoverable even if the logs persist somewhere (crypto erasure).
Data Portability Implementation
Build an export mechanism that outputs structured JSON (not a raw database dump). Include all data about the user, organized sensibly.
Common pattern: background job generates a ZIP file, email the user a download link with proper authentication. Make sure the export is comprehensive - users often request this before switching to a competitor, so missing data looks bad.
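The export job itself is mostly plumbing. A minimal sketch of the ZIP-building step, assuming each data section (profile, transactions, and so on) has already been gathered into plain dicts and lists:

```python
import io
import json
import zipfile

def build_export(user_id: str, sections: dict) -> bytes:
    """Bundle each data section as its own JSON file inside a ZIP.
    `sections` maps a section name ("profile", "transactions", ...)
    to JSON-serializable data."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in sections.items():
            # default=str handles datetimes and Decimals without custom encoders
            zf.writestr(f"{user_id}/{name}.json",
                        json.dumps(data, indent=2, default=str))
    return buf.getvalue()
```

The background job would upload these bytes to object storage and email the user an authenticated, expiring download link.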
Cross-Border Data Transfers
If you’re moving data outside the EU or UK, you need safeguards:
EU to US transfers:
- Standard Contractual Clauses (SCCs) with data recipients
- Transfer Impact Assessment documenting why the transfer is safe
- Additional safeguards: encryption with EU-controlled keys, access controls, legal commitments
UK to EU transfers:
- Currently work via adequacy decision (UK considered “adequate” by EU)
- May change if UK law diverges significantly from GDPR
- Watch for regulatory updates
EU to anywhere else:
- Check if the country has an adequacy decision (Switzerland, Japan, Canada, etc.)
- If not, use SCCs + Transfer Impact Assessment
- For some countries (China, Russia), data localization laws may require in-country storage
Practical approach: use EU/UK cloud regions for EU/UK customer data where possible.
Common Pitfalls
The backup trap - Teams forget about backups entirely. You can keep backups for disaster recovery, but if you restore one, you must re-apply all deletion requests that happened since that backup. This means keeping a log of deletion requests indefinitely (which itself is fine - user IDs of deleted users aren’t personal data in isolation).
The aggregate data trap - “We deleted the user but kept aggregated metrics that include them.” This is fine if the aggregation is large enough that individuals can’t be re-identified. Rule of thumb: if a metric includes fewer than 10-15 people, it might be re-identifiable.
The third-party trap - Your payment processor, KYC provider, email service, analytics platform - they’re all processing personal data. You need Data Processing Agreements (DPAs) with all of them. Make sure their terms allow you to fulfill deletion requests and handle the data according to GDPR.
The “we’re just a processor” trap: If you make any decisions about what data to collect or how to use it, you’re a controller, not just a processor. Being a processor sounds easier but rarely applies to product companies.
The ML model trap: If you trained a model on user data and they request deletion, you have a problem. Options: retrain the model without their data (expensive), document that the model is aggregated and anonymized (only works if true), or accept that you need to keep data for legitimate interest in maintaining service quality (risky, need strong justification).
Enforcement Reality
GDPR has real teeth. Fines up to €20M or 4% of global annual revenue, whichever is higher. UK GDPR has similar penalties.
What actually triggers enforcement:
- Data breaches with poor security practices
- Ignoring user rights requests (especially repeated ones)
- Processing data without documented legal basis
- International data transfers without proper safeguards
- No response to regulator inquiries
What doesn’t typically trigger enforcement:
- Small technical violations if you’re making good faith efforts
- Imperfect deletion (as long as you have a process)
- Reasonable delays in responding to requests (one month is the standard, extendable to three months for complex requests)
The key: have a system to handle requests, document your legal bases, and respond when regulators ask questions. Perfect compliance on day one matters less than demonstrating you take it seriously.
AML / CTF / Sanctions: KYC and Keep KYCing
The KYC Mandate
Anti-Money Laundering (AML) and Counter-Terrorist Financing (CTF) regulations apply to virtually all financial services. If you’re moving money, holding money, or facilitating transactions, you’re in scope. The regulations are country-specific but follow international standards from the Financial Action Task Force (FATF).
In the EU, it’s primarily the Anti-Money Laundering Directives (currently 5th and 6th AMLDs). In the UK, it’s the Money Laundering Regulations 2017 (as amended). In the US, it’s the Bank Secrecy Act and related regulations enforced by FinCEN. Different laws, same basic requirements.
The core requirement: Know Your Customer (KYC) when they onboard, and monitor their transactions continuously for suspicious activity. This isn’t a one-time check - it’s ongoing surveillance.
What KYC Actually Means
Know Your Customer (KYC)
KYC is more than “take a selfie with an ID.” You need to:
Verify identity using reliable, independent sources, for instance:
- Government-issued ID (passport, driver’s license, national ID card)
- Proof of address (utility bill, bank statement, government correspondence)
- Biometric verification (selfie matching the ID photo)
Understand the customer’s profile:
- Source of funds (where does their money come from?)
- Purpose of the relationship (what will they use your service for?)
- Expected transaction patterns (volume, frequency, amounts, countries)
Assess risk level:
- Geographic risk (are they from a high-risk country?)
- Product risk (are they using high-risk products like crypto or international transfers?)
- Customer risk (are they a politically exposed person, or PEP?)
Enhanced Due Diligence (EDD) for high-risk customers:
- Additional identity verification
- Source of wealth documentation (not just source of funds)
- Ongoing monitoring with lower thresholds
- Senior management approval
The risk-based approach means you don’t treat everyone the same. Low-risk customers get streamlined onboarding. High-risk customers get scrutinized heavily.
Ongoing Monitoring
KYC isn’t one-and-done. You need continuous transaction monitoring looking for:
Structured transactions (structuring/smurfing):
- Multiple transactions just below reporting thresholds
- Example: 5 deposits of $9,000 instead of one $10,000 deposit
- Automated alerts when patterns emerge
Unusual activity relative to customer profile:
- Sudden spike in transaction volume
- Transactions inconsistent with stated business purpose
- Round-dollar amounts in unusual patterns
- Transactions to/from high-risk countries
Typologies of money laundering:
- Rapid movement of funds (layering)
- Transactions with no apparent economic purpose
- Use of shell companies or nominees
- Trade-based money laundering patterns
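As a concrete illustration of the structuring pattern above, here is a deliberately simplified detector: flag customers with three or more near-threshold deposits inside a short window. The threshold, margin, and window are assumed policy values, and real systems use far richer features:

```python
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 10_000          # reporting threshold (e.g. the US $10K CTR line)
NEAR_MARGIN = 0.10          # "just below" = within 10% under the threshold
WINDOW = timedelta(days=3)  # assumed lookback window

def structuring_alerts(txns):
    """txns: iterable of (customer_id, datetime, amount).
    Returns the set of customers with 3+ near-threshold deposits
    inside the window."""
    near = defaultdict(list)
    for cust, ts, amount in txns:
        if THRESHOLD * (1 - NEAR_MARGIN) <= amount < THRESHOLD:
            near[cust].append(ts)
    alerts = set()
    for cust, stamps in near.items():
        stamps.sort()
        # Sliding check: do any 3 consecutive near-threshold deposits
        # fall within the window?
        for i in range(len(stamps) - 2):
            if stamps[i + 2] - stamps[i] <= WINDOW:
                alerts.add(cust)
                break
    return alerts
```

In production this would run as a batch job over recent transactions, and each alert would open a case rather than block anything directly.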
When something looks suspicious, you file a Suspicious Activity Report (SAR) with your financial intelligence unit (FIU). In the US, that’s FinCEN. In the UK, it’s the National Crime Agency. In most EU countries, there’s a national FIU.
Critical rule: you cannot tell the customer you’ve filed a SAR. This is called “tipping off” and is itself a crime. Your system needs to handle SAR workflows without leaving visible traces to the customer.
Record Retention
AML regulations require keeping records for 5+ years depending on jurisdiction:
- EU: 5 years from end of business relationship (some member states extend this)
- UK: 5 years from end of business relationship
- US: 5 years from transaction date (SARs: 5 years from filing date)
Important: the clock typically starts from “end of relationship,” not from when you collected the data. A customer who’s been with you for 10 years means you’re keeping their onboarding documents for 15 years total. Also, if there’s an ongoing investigation, you may need to retain records indefinitely until it concludes.
This applies to:
- Identity verification documents
- Transaction records
- Risk assessments
- SAR filings and supporting documentation
- Correspondence with customers about their activity
This directly conflicts with GDPR deletion requests. As mentioned earlier, AML legal obligations override GDPR. You keep the data and document why.
Risk as a First-Class Citizen
Customer and Risk as First-Class Entities
Your data model needs to treat customer risk as a core entity, not an afterthought.
Key entities:
- Customer: Basic identity and relationship data
- RiskProfile: Current risk score, risk factors, assessment date
- KYCCheck: Each verification attempt (ID check, address check, biometric, date, result, provider)
- RiskAssessment: Periodic reviews of customer risk (should happen at regular intervals, triggered by life events, or when patterns change)
- Transaction: All the usual fields plus risk-relevant metadata (country, counterparty, purpose codes)
Customer risk isn’t static. You need to reassess periodically:
- Low risk: every 2-3 years
- Medium risk: annually
- High risk: every 6 months or continuously
Life events trigger re-assessment:
- Large change in transaction patterns
- Change of address to high-risk country
- New adverse media or sanctions list hit
- Customer requests products/services inconsistent with their profile
Building KYC Flows
Modern KYC is mostly outsourced to specialized providers, but you still own the orchestration:
Identity verification providers:
- Onfido, Jumio, Veriff for ID + selfie verification
- They handle the hard parts: ID document recognition, liveness detection, face matching
- You get back: verification result, extracted data, confidence scores
Document verification:
- Usually handled by same providers as identity verification
- For proof of address, often manual review is still needed
Data enrichment:
- Credit bureaus for additional identity confirmation
- Company registries for business customers
- Open data sources for reputation checks
Your flow typically looks like:
- Customer submits ID and selfie
- Call identity verification API
- If verification passes, extract and store identity data
- Run initial risk assessment based on country, product, and profile
- If high risk, request additional documents or reject
- Store all verification data and decisions with timestamps
- Create initial RiskProfile record
The trick is handling failures gracefully. Identity verification fails surprisingly often (bad lighting, expired documents, name mismatches). You need retry logic, fallback to manual review, and clear user communication.
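The onboarding flow with retries and a manual-review fallback can be sketched like this. The `verify`, `assess_risk`, and `store` callables are hypothetical stand-ins for your IDV provider client, risk engine, and persistence layer:

```python
MAX_ATTEMPTS = 3  # assumed retry budget for retryable verification failures

def onboard(customer, verify, assess_risk, store) -> str:
    """Orchestrate the KYC steps: verify identity (with retries),
    assess risk, persist results. Returns the onboarding outcome."""
    for _ in range(MAX_ATTEMPTS):
        result = verify(customer)             # call the IDV provider
        if result["status"] == "passed":
            break
        if result["status"] == "retryable":   # bad lighting, blurry photo
            continue                          # ask the user to try again
        return "manual_review"                # expired doc, name mismatch
    else:
        return "manual_review"                # retry budget exhausted
    risk = assess_risk(customer, result["extracted"])
    store(customer, result, risk)             # persist decision + timestamps
    return "request_documents" if risk == "high" else "approved"
```

Note the asymmetry: transient failures loop back to the user, while hard failures and exhausted retries route to a human. Silent rejection is the one outcome you never want.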
Transaction Monitoring
Real-time vs. batch monitoring:
Real-time monitoring:
- Runs synchronously during transaction processing
- Checks against hard rules: sanctions lists, transaction limits, obvious patterns
- Must be fast (<100ms added latency)
- Decision: allow, block, or queue for manual review (card auths need instant decisions; withdrawals can wait)
- Use for high-confidence, high-severity scenarios
Batch monitoring:
- Runs periodically (hourly, daily) over historical transactions
- Can use more complex ML models and cross-customer analysis
- Generates alerts for investigation, doesn’t block transactions
- Use for pattern detection and lower-confidence scenarios
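The real-time path boils down to a small decision function over cheap, high-confidence signals. A sketch, with an assumed daily limit standing in for your real rule set:

```python
def realtime_decision(txn: dict, sanctions_hit: bool,
                      daily_total: int, limit: int = 10_000) -> str:
    """Hard rules only: this path must add minimal latency, so it checks
    cheap, high-severity signals and defers pattern analysis to batch."""
    if sanctions_hit:
        return "block"  # sanctioned parties are never processed
    if daily_total + txn["amount"] > limit:
        # Card auths need an instant answer, so over-limit means block;
        # withdrawals can sit in a review queue instead.
        return "block" if txn["type"] == "card_auth" else "review"
    return "allow"
```

Everything fuzzier - velocity patterns, peer-group anomalies, ML scores - belongs in the batch path, where a false positive costs an analyst's time rather than a declined payment.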
Sanctions Screening
Sanctions screening deserves its own section because it’s technically different from AML. You’re checking customers and transactions against government-published lists:
Key lists:
- OFAC (US Office of Foreign Assets Control): SDN list, sectoral sanctions
- EU sanctions lists: consolidated list from EU External Action Service
- UN sanctions lists: individuals and entities
- UK sanctions lists (post-Brexit, maintained separately)
When to screen:
- Customer onboarding (check name, address, date of birth)
- Ongoing (daily or weekly rescreening of all customers as lists update)
- Transactions (check beneficiary name, banks involved, countries)
The fuzzy matching problem:
- Lists contain names in various languages and transliterations
- Need fuzzy matching algorithms (Levenshtein distance, phonetic matching)
- This generates false positives constantly
- “John Smith” might match dozens of sanctioned individuals named “Jon Smythe” or similar
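To make the problem concrete, here is a toy screener using stdlib string similarity as a stand-in for real fuzzy matching. Production systems layer phonetic algorithms, transliteration handling, and date-of-birth checks on top; the 0.75 threshold is an arbitrary illustration:

```python
from difflib import SequenceMatcher

def name_score(candidate: str, listed: str) -> float:
    """Similarity in [0, 1]. Sorting tokens makes the comparison robust
    to name-order differences ("Smith John" vs "John Smith")."""
    a = " ".join(sorted(candidate.lower().split()))
    b = " ".join(sorted(listed.lower().split()))
    return SequenceMatcher(None, a, b).ratio()

def screen(candidate: str, sanctions_names, threshold: float = 0.75):
    """Return listed names similar enough to warrant manual review."""
    return [n for n in sanctions_names
            if name_score(candidate, n) >= threshold]
```

Notice that "Jon Smythe" scores high against "John Smith" - exactly the kind of hit that lands in an analyst's queue as a probable false positive. That volume is why the review-and-whitelist workflow below matters.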
Managing false positives:
- First-time matches: manual review required
- Create “whitelist” of reviewed false positives to suppress future alerts
- Store the review decision, who made it, and supporting evidence
- Re-review periodically because sanctions lists change
Blocking vs. rejecting:
- Sanctions hits must be blocked (transaction rejected, funds frozen if already received)
- This is different from AML suspicious activity (where you might process the transaction but file a SAR)
- Blocking creates immediate customer service issues - prepare for that
Third-Party Screening Services
Most companies use third-party APIs for screening:
Popular providers:
- Dow Jones Risk & Compliance
- Refinitiv World-Check
- ComplyAdvantage
- Chainalysis (for crypto-specific screening)
These services provide:
- Consolidated sanctions lists from multiple jurisdictions
- PEP (Politically Exposed Persons) databases
- Adverse media screening (negative news about individuals/companies)
- Fuzzy matching algorithms
- Regular list updates
You still own the integration and workflow:
- Call the API during onboarding and periodically
- Store the screening results and timestamps
- Handle matches with manual review workflow
- Document all decisions
Case Management and Investigation Workflow
When your monitoring system flags something suspicious, you need a case management system:
Typical workflow:
- Alert generated by automated monitoring
- Assigned to compliance analyst
- Analyst reviews: transaction history, customer profile, supporting documents
- Decision: false positive (close), needs more info (request from customer), or suspicious (escalate)
- If suspicious: draft SAR, get approval, file with FIU
- Log all actions and decisions
This workflow needs to be separate from your main customer-facing systems. Remember: you can’t tip off the customer that they’re under investigation.
Key features needed:
- Queue of open cases with priority and age
- Full transaction history and customer profile in one view
- Ability to attach notes and supporting documents
- Approval workflow for SARs
- Audit log of every action taken
- Reporting and metrics (how many alerts, false positive rate, time to resolution)
When Regulators Come Knocking
AML violations carry serious penalties: fines, loss of license, and criminal charges for executives in egregious cases.
What triggers enforcement:
- Failure to file SARs on obvious suspicious activity
- No KYC program or inadequate KYC
- Ignoring red flags or high-risk customers
- No transaction monitoring system
- Poor record keeping
- Processing transactions for sanctioned individuals/entities
The regulators’ perspective: they’re less concerned with perfect accuracy than with demonstrable effort. Having a system, documenting decisions, and showing you take it seriously goes a long way. But ignoring red flags or having no system at all? That’s when people lose licenses and face criminal charges.
PCI-DSS: If You Touch Card Data
What It Is and When You Can’t Avoid It
PCI-DSS is the payment card industry’s way of saying “if you want to play with cards, here are the rules.” It’s not a law - it’s a contractual requirement from Visa, Mastercard, Amex and related players. Break the rules, and they’ll fine you or cut off your ability to process cards entirely.
The scope question: do you store, process, or transmit cardholder data? If yes, you’re in scope. This applies whether you’re:
- A merchant - accepting card payments
- An issuer - creating and managing cards for customers
- A service provider - handling card data for others (usually: managing cards, processing payments, tokenizing cards)
Cardholder data that triggers PCI scope:
- Primary Account Number (PAN) - the actual card number
- Cardholder name
- Card expiration date
- Service code
Sensitive Authentication Data (SAD) - the high-risk stuff:
- CVV/CVC codes
- Full magnetic stripe data
- PIN or PIN blocks
Who can store what:
- Merchants - Can store cardholder data (encrypted), but cannot store SAD after authorization.
- Issuer Service Providers - Can store SAD, since they typically manage card issuance and the card lifecycle. It requires HSM (Hardware Security Module) protection, strict key management, and the full weight of PCI-DSS compliance.
Merchant vs Service Provider Levels
For merchants (accepting payments), you’re categorized by transaction volume:
- Level 1 - 6M+ transactions/year, or you got breached (forced into Level 1 as punishment)
- Level 2 - 1-6M transactions/year
- Level 3 - 20K-1M e-commerce transactions/year
- Level 4 - Under 20K e-commerce or under 1M total
Higher levels mean more scrutiny. Level 1 requires an annual on-site assessment by a Qualified Security Assessor (QSA). Lower levels can self-certify compliance by completing a Self-Assessment Questionnaire (SAQ).
For service providers (handling card data for others):
- Level 1 - 300K+ transactions/year - requires QSA assessment
- Level 2 - Under 300K transactions - can self-assess
The Strategy for Everyone: Stay Out of Scope
Whether you’re a merchant or issuer, to the extent you can, your primary PCI strategy should be the same: stay out of scope.
For merchants: Use Stripe Checkout, PayPal, or hosted payment pages. User enters card details on the processor’s page, you get back a token. You’re now in SAQ A compliance, which is ~22 questions instead of 300+.
For issuers: Use an issuer processor like Marqeta, Galileo, Thredd (formerly GPS), or Stripe Issuing. They handle the card lifecycle - generation, storage, authorization - and you interact via APIs using tokens. You never see PANs. They’re PCI compliant, you’re not in scope.
This is the model most basic fintech card programs use. You want to launch a debit card or corporate card program? You don’t build a card processor. You use Marqeta’s API to create cards, they return a token, you store the token. When your customer needs to see their card number in your app, you call Marqeta’s API with the token, they return the PAN (usually with additional authentication requirements), and you display it briefly.
The only time you should consider handling card data directly:
As a merchant:
- You’re processing hundreds of millions and processor fees hurt more than compliance costs
- You have specific business needs requiring direct card-on-file handling
- You’re becoming a payment facilitator
As an issuer:
- You’re building an issuer processor business yourself (like Marqeta, Galileo did)
- You’re a BIN sponsor wanting vertical integration to maximize margins on per-transaction fees
- You need full control over real-time authorization decisions (e.g. authorizing against a crypto-based ledger)
- You’re operating in a market where processor options are limited or expensive
- You’ve hit scale where processor fees exceed the cost of compliance
Otherwise? Stay out of scope. The compliance burden isn’t worth it.
When You’re Actually In Scope
If you have a legitimate reason to handle card data directly:
Segmenting the Cardholder Data Environment
Network segmentation is your friend. Create a Cardholder Data Environment (CDE) that’s isolated from everything else:
- CDE: Systems that store, process, or transmit cardholder data
- Non-CDE: Everything else (marketing site, support tools, analytics, internal apps)
The fewer systems in the CDE, the fewer things you need to secure to PCI standards, and the fewer things you need to cover during quarterly ASV (Approved Scanning Vendor) scans and annual assessments.
Data Storage Requirements
Rendering PANs unreadable (Requirement 3.4):
Your options: strong encryption, truncation, tokenization, or one-way hashing.
What won’t work:
- Disk-level encryption (BitLocker, LUKS) - explicitly banned for online systems. Once you’re logged in, data is decrypted. Only acceptable for removable media.
- Transparent Database Encryption (TDE) alone - same problem. Database compromise = unencrypted access.
What’s required:
- Column or field-level encryption with keys managed separately
- Tokenization (replace PAN with non-reversible token, store mapping in secure vault)
- One-way hashing - but as of PCI DSS 4.0, must use keyed cryptographic hashes (HMAC, KMAC, CMAC, or GMAC). Plain SHA-256 won’t cut it anymore.
- Key management (Requirements 3.5-3.7): keys in HSM or dedicated KMS, access restricted to minimum custodians, documented lifecycle (generation, rotation, retirement, destruction), immediate retirement if compromise suspected
- For display: mask to show only first 6 and last 4 digits
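Masking and keyed hashing are easy to get concretely right. A sketch using Python's stdlib HMAC - with the strong caveat that in a real CDE the key lives in an HSM or KMS and never sits next to the data as a byte string like this:

```python
import hashlib
import hmac

def mask_pan(pan: str) -> str:
    """Display form: first 6 and last 4 digits only (Requirement 3.4)."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

def pan_reference(pan: str, key: bytes) -> str:
    """Keyed hash usable as a lookup reference. PCI DSS 4.0 requires a
    keyed cryptographic hash (e.g. HMAC) rather than plain SHA-256,
    precisely so the hash can't be brute-forced over the small PAN space
    without the key. Key management is the hard part, not this call."""
    return hmac.new(key, pan.encode(), hashlib.sha256).hexdigest()
```

The reason plain SHA-256 fails: PANs have so little entropy (known BINs, Luhn check digit) that an attacker can hash the whole space in hours. The HMAC key is what makes the mapping one-way in practice.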
SAD storage (Requirement 3.3):
Merchants cannot store SAD after authorization - not encrypted, not hashed, not at all. Issuers can store SAD (Requirement 3.3.3) if there’s a documented business need, it’s HSM-encrypted, and storage is minimized.
Implementation Considerations
Access control: Every person who can see PANs needs a justified business reason. Log every access. Build role-based access controls from day one.
Logging: You need detailed audit logs, but cannot log PANs or CVVs. Strip card fields from request logs, exception messages, debug output. Set up automated scanning to detect PANs in logs and alert immediately.
Third-party services: Error tracking (Sentry, Bugsnag) or analytics capturing request bodies with card data? Now that third party needs to be PCI compliant too. Whitelist what you send.
Dev/test environments: Never use production PANs. Use test card numbers or synthetic data. If you must copy production data, irreversibly mask PANs first.
Backups: Need the same encryption and access controls as production. Cryptographically erase or physically destroy retired backups.
When You’re Breached
If card data leaks, you’re automatically elevated to Level 1 regardless of volume. This triggers mandatory annual QSA audits plus:
- Fines from card brands ($5K-$100K/month until compliant)
- Increased processing fees
- Potential termination of your merchant/issuer account
- Forensic investigation costs
- Customer notification costs
- Reputational damage
The card brands care more about whether you were trying to be compliant than perfect compliance. No PCI program + breach = severe penalties. Active program + breach = still expensive but more understanding.
The Bottom Line
Default strategy for everyone: Stay out of scope. Use processors that handle the PCI burden. For merchants, use Stripe/PayPal. For issuers, use Marqeta/Lithic/Thredd.
If you must be in scope: Invest in proper segmentation, HSMs, and access controls from day one. Retrofitting PCI compliance onto an existing system is cumbersome. Budget tens of thousands annually just for audits, plus ongoing security infrastructure.
Licensing Requirements: EMI / PI / Bank-Lite
PI vs EMI vs Bank
If you’re holding customer money or moving it around in the EU/UK, you need a license. Three main types:
Payment Institution (PI): Execute payments, issue cards, acquire merchants. Cannot hold customer funds overnight (with some exceptions). Capital: €20K-€125K.
E-Money Institution (EMI): Everything a PI can do, plus hold customer balances. Capital: €350K minimum. All customer funds must be safeguarded.
Bank (Credit Institution): Everything an EMI can do, plus take deposits and make loans. Capital: €5M+ plus risk-weighted requirements. Deposits protected by insurance (€100K/customer in EU).
The critical distinction: e-money isn’t a deposit. Deposits earn interest and are protected by deposit insurance. E-money is just a claim on the issuer, backed by safeguarded funds. This is why EMIs have lower capital requirements.
Most fintechs start with EMI licenses - flexibility without bank-level capital. PIs are more limited since you can move money but not really hold it.
Safeguarding in Practice
Safeguarding means customer funds are legally segregated from your corporate funds. If you go bankrupt, customers get their money back before your creditors. Two options:
Option 1: Segregated accounts at a credit institution
- Open dedicated bank accounts for customer funds
- These accounts are legally earmarked as customer money
- Cannot be used for your own purposes
- In bankruptcy, these funds are outside the insolvency estate
Option 2: Insurance or guarantee
- Customer funds can be in regular accounts
- You maintain insurance covering the full amount
- Insurance kicks in if you fail
Nearly everyone uses segregated accounts these days, since insurance tends to be expensive and harder to get.
In practice: you maintain separate bank accounts for corporate funds vs customer funds. Daily reconciliation ensures safeguarding account balances match (or exceed) customer balances in your ledger. If they don’t, you transfer from corporate to safeguarding immediately.
Safeguarding in Code
Segregated Account Structures and Fund Flows
Your payment flows need to respect safeguarding:
Customer deposits money:
- Money arrives in your bank account (often a collection account)
- You credit customer’s e-money balance in your ledger
- You transfer money from collection account to safeguarding account
- Reconciliation shows customer balances = safeguarded funds
Customer spends money:
- Transaction is authorized
- You debit customer’s e-money balance in your ledger
- Funds move from safeguarding to settlement (e.g., outgoing account earmarked for card network settlement)
- Reconciliation still shows customer balances = safeguarded funds
Money moves between accounts constantly, but the safeguarding equation must hold at all times. This means your fund flow system needs to:
- Track which bank account holds which funds
- Know when money is “in flight” between accounts
- Automatically trigger transfers to maintain safeguarding ratios
- Alert if safeguarding falls below required levels
Teams usually build a “treasury management” service that:
- Monitors all bank account balances (via API or manual reporting)
- Calculates required safeguarding based on customer ledger balances
- Triggers transfers between accounts to maintain compliance
- Generates daily reports for compliance team
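The transfer-triggering step can be sketched as a pure planning function: given the required safeguarding amount and current balances, emit transfer instructions that restore the equation. The account names and the single-currency assumption are simplifications of mine.

```python
from decimal import Decimal

def plan_sweeps(required: Decimal, collection: Decimal,
                safeguarding: Decimal) -> list[tuple[str, str, Decimal]]:
    """Return (from_account, to_account, amount) instructions that restore
    the safeguarding equation: safeguarded funds >= customer balances."""
    transfers = []
    shortfall = required - safeguarding
    if shortfall <= 0:
        return transfers  # equation already holds, nothing to move
    # First sweep customer money still sitting in the collection account.
    from_collection = min(shortfall, collection)
    if from_collection > 0:
        transfers.append(("collection", "safeguarding", from_collection))
        shortfall -= from_collection
    # Anything still missing is funded from corporate - and should page someone.
    if shortfall > 0:
        transfers.append(("corporate", "safeguarding", shortfall))
    return transfers
```

For example, with €1,000 of customer balances, €800 safeguarded, and €150 in collection, the planner sweeps the €150 and tops up €50 from corporate.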
Reconciliation
You need daily reconciliation across internal ledger accounts and flows, external bank accounts, liquidity pools (e.g., on external exchanges), and payables (e.g., card scheme settlement).
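The core of each reconciliation run is the same comparison, whatever the source: internal ledger balances versus an external statement, with every break outside tolerance reported. A minimal sketch:

```python
from decimal import Decimal

def reconcile(ledger: dict[str, Decimal], statements: dict[str, Decimal],
              tolerance: Decimal = Decimal("0.00")) -> list[tuple[str, Decimal]]:
    """Compare internal ledger balances against external statements
    (bank accounts, exchange balances, scheme settlement reports).
    Returns (account, difference) for every break outside tolerance."""
    breaks = []
    for account in sorted(set(ledger) | set(statements)):
        diff = (ledger.get(account, Decimal("0"))
                - statements.get(account, Decimal("0")))
        if abs(diff) > tolerance:
            breaks.append((account, diff))
    return breaks
```

In practice the output of this function feeds an exceptions queue that an operations team works down every morning; a non-empty result on the safeguarding account is an incident, not a ticket.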
Audit Trails and Regulatory Reporting
Regulators will audit your safeguarding. They want to reconstruct your position at any point in time: total customer balances, safeguarding account balances, in-flight transfers. For any customer: their balance history and which transactions affected it.
You’ll need to report monthly or quarterly: total e-money outstanding, total funds safeguarded, breakdown by currency, transaction volumes. Build reporting into your system from day one - manual spreadsheet calculations don’t scale.
Event sourcing or append-only logs work well here. Every state change is an event you can replay to reconstruct historical state.
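The replay idea fits in a few lines. Assuming each ledger entry is an immutable event with a timestamp and signed amount (the event shape here is illustrative), reconstructing any historical balance is a filtered sum:

```python
from decimal import Decimal

# Append-only event log: replaying events up to a point in time reconstructs
# any historical balance - exactly what a safeguarding audit asks for.
events = [
    {"ts": "2024-03-01T10:00:00Z", "customer": "c1", "delta": Decimal("100")},
    {"ts": "2024-03-02T09:30:00Z", "customer": "c1", "delta": Decimal("-40")},
    {"ts": "2024-03-03T14:00:00Z", "customer": "c1", "delta": Decimal("25")},
]

def balance_at(customer: str, as_of: str) -> Decimal:
    """Customer balance as of a timestamp (ISO-8601 strings sort correctly)."""
    return sum((e["delta"] for e in events
                if e["customer"] == customer and e["ts"] <= as_of),
               Decimal("0"))
```

The same replay answers the regulator’s “total e-money outstanding on date X” question if you drop the customer filter.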
License Violations Hurt
License violations are serious:
- Operating without a license: Regulators shut you down, fines, potential criminal charges for executives
- Safeguarding breaches: Immediate regulatory intervention, restrictions on taking new customers, potential license revocation
- Poor reconciliation: Requires remediation plan, enhanced monitoring, fines if repeated
Regulators (FCA in UK, various NCAs in EU) conduct regular audits of licensed institutions. They’ll look at:
- Safeguarding reconciliation records
- Audit trails of fund flows
- Compliance with capital requirements
- Governance and risk management frameworks
- Customer complaint handling
The bar is high. This isn’t “startup in a garage” territory - you need proper financial controls, compliance staff, and governance. Budget accordingly.
MiCA: Europe Regulates Crypto
Europe’s Crypto Framework
Markets in Crypto-Assets (MiCA) is the EU’s comprehensive regulatory framework for crypto that came into full effect in 2024-2025. It’s the first major jurisdiction to create actual regulation (not just guidance) for crypto assets.
MiCA applies if you’re operating crypto services in the EU. It doesn’t matter where your company is - if you’re serving EU customers with crypto products, you’re in scope.
The regulation creates a licensing regime for Crypto Asset Service Providers (CASPs) and specific rules for different token types. Think of it as the EU saying “crypto is real finance now, here are the rules.”
Token Classifications
MiCA categorizes tokens into three types, each with different requirements:
Asset-Referenced Tokens (ARTs) - Stablecoins backed by a basket of assets or currencies (not just one fiat currency). Example: if you created a stablecoin backed by a mix of USD, EUR, and gold. Heavily regulated, need authorization as a credit institution or specific ART issuer license.
E-Money Tokens (EMTs) - Stablecoins pegged to a single fiat currency (USDC, USDT equivalents in EU). Must be issued by licensed e-money institutions or credit institutions. Same safeguarding requirements as regular e-money - funds backing the tokens must be held in segregated accounts.
Utility Tokens - Everything else that’s not a security or stablecoin. Used to access a service or product. Lighter regulation unless they become significant (over €15M market cap or 5M+ users, then you need a whitepaper and consumer protections).
The classification matters because it determines your licensing requirements and ongoing obligations. Get it wrong and you’re operating illegally.
When You’re a CASP
You’re a Crypto Asset Service Provider if you’re doing any of these activities professionally in the EU:
- Custody and administration - Holding crypto assets on behalf of customers (any wallet service where you control the keys)
- Exchange services - Operating a crypto exchange platform
- Execution of orders - Taking customer orders and executing trades
- Placing of crypto-assets - Acting as an intermediary in token sales/ICOs
- Reception and transmission of orders - Routing customer orders to exchanges
- Providing advice - Giving personalized recommendations on crypto investments
- Portfolio management - Managing crypto portfolios for customers
- Transfer services - Moving crypto between wallets for customers
If you’re doing any of this, you need a CASP license from an EU member state regulator. The license is passportable - get it in one EU country, you can operate across the EU.
What MiCA Requires
Reserve Requirements for Stablecoins
If you’re issuing EMTs or ARTs, you need reserves backing the tokens at all times:
For EMTs (single-currency stablecoins):
- 100% reserves in high-quality liquid assets
- Held in segregated accounts at credit institutions
- Can’t be used for your own purposes (safeguarding rules, like e-money)
- Daily reconciliation between tokens issued and reserves held
- If you’re “significant” (over €5B reserves, 10M+ holders, or 2.5M+ transactions/day worth €500M+), additional capital requirements and EBA supervision kick in
For ARTs (basket-backed stablecoins):
- Reserve assets proportional to the composition of the basket
- More complex valuation and custody requirements
- Capital requirements based on size (minimum €350K, scales up)
- If you’re “significant” (€5B+ or systemic importance), even stricter rules
The trick here is real-time or near-real-time reconciliation. You can’t have more tokens circulating than you have reserves. This means your ledger tracking token issuance needs to stay in sync with your reserve account balances.
Custody and Segregation Requirements
If you’re holding crypto for customers (which most CASPs do), you need:
Segregation: Customer assets must be segregated from your own assets. You can’t use customer crypto for your operations, trading, or lending without explicit consent.
Key management: Private keys must be held securely. MiCA doesn’t mandate specific technology (HSM vs. MPC vs. multisig), but you need documented controls and insurance or equivalent guarantees.
Bankruptcy protection: If you go bankrupt, customer assets shouldn’t be part of the estate available to creditors. This requires legal and operational separation.
Most platforms implement this with:
- Separate hot wallets for customer funds vs. company operational funds
- Cold storage for majority of customer assets
- Clear on-chain or off-chain records showing which assets belong to which customers
- Legal agreements establishing custody relationship
Transaction Reporting and Recordkeeping
CASPs need to maintain detailed records:
- All orders and transactions (including failed ones)
- Customer identification and due diligence records
- Custody records showing which assets belong to which customers
- Communications with customers about orders/transactions
- Records of complaints and how they were resolved
Retention: typically 5 years, similar to MiFID requirements for traditional financial services.
You also need to report certain transactions to regulators, though the specific reporting framework is still being finalized. Expect something similar to securities transaction reporting.
Operational Resilience and Business Continuity
MiCA incorporates DORA requirements (more on DORA later), meaning:
- Documented ICT risk management
- Business continuity plans with specific RTO/RPO targets
- Regular testing of disaster recovery
- Incident reporting to regulators within tight timeframes
- Third-party risk management for critical service providers
For crypto specifically, this means you need plans for:
- Exchange or blockchain outages
- Loss of access to custody systems
- Smart contract failures
- Oracle failures (for anything using price feeds)
Market Abuse Rules
If you’re operating a trading platform, you need to detect and prevent:
- Market manipulation (wash trading, spoofing, pump and dump schemes)
- Insider trading (trading on non-public information)
- Unlawful disclosure of inside information
This means building surveillance systems similar to what securities exchanges use. You’re looking for patterns like:
- Self-trading or circular trading
- Large orders placed and canceled without execution (spoofing)
- Coordinated trading activity
- Trading ahead of announcements
Custody and Segregation
Wallet and Custody Layer Design
The segregation requirements fundamentally shape your architecture. You need separate wallet infrastructure for:
Customer funds: The crypto you’re holding for customers
- Hot wallets - Small amounts for operational liquidity (withdrawals, trades)
- Warm wallets - Medium amounts with some online access but additional controls
- Cold storage - Majority of assets, offline or with strict access controls
Company operational funds: The crypto you use for operations
- Trading inventory (if you’re market making)
- Fee collection wallets
- Liquidity for operations
Treasury reserves: For stablecoin issuers, the reserves backing tokens
- Separate from both customer funds and operational funds
- Need real-time visibility into reserve levels
- Integration with traditional banking for fiat reserves
The key is being able to prove segregation at any moment. This usually means:
- Distinct wallet addresses for each category
- On-chain transparency (publish wallet addresses)
- Off-chain database tracking which assets belong to which customers
- Regular attestation reports (daily or weekly) showing segregation is maintained
Token Issuance and Reserve Management
For stablecoin issuers, you need real-time tracking of:
Tokens outstanding: How many tokens exist across all blockchains
- On Ethereum, Polygon, BSC, etc.
- Including tokens held in smart contracts
- Burned tokens should reduce outstanding supply
Reserve assets: What backs those tokens
- Fiat in bank accounts
- Government bonds or money market funds (for yield-bearing stablecoins)
- Other reserve assets specified in your whitepaper
The system must prevent issuing tokens without corresponding reserves. This usually means:
- Mint request comes in (customer wants to buy tokens)
- Verify reserve assets are available
- Lock the reserves (mark as allocated)
- Mint tokens on-chain
- Deliver tokens to customer
- Update reconciliation ledger
If any step fails, the whole process should roll back atomically.
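The lock/commit/rollback bookkeeping above is a compensating-transaction pattern. This sketch uses a toy `ReserveLedger` of my own and a single currency; a real issuer would do this against a database with proper transactions and an on-chain confirmation loop.

```python
from decimal import Decimal

class ReserveLedger:
    """Toy ledger invariant: tokens outstanding never exceed backed reserves."""
    def __init__(self, reserves: Decimal):
        self.reserves = reserves           # fiat backing, unallocated
        self.locked = Decimal("0")         # allocated to in-flight mints
        self.tokens_outstanding = Decimal("0")

    def lock(self, amount: Decimal) -> None:
        if amount > self.reserves:
            raise ValueError("insufficient reserves - refuse the mint")
        self.reserves -= amount
        self.locked += amount

    def commit_mint(self, amount: Decimal) -> None:
        # On-chain mint succeeded: locked reserves now back live tokens.
        self.locked -= amount
        self.tokens_outstanding += amount

    def rollback(self, amount: Decimal) -> None:
        # On-chain mint failed: release the lock, nothing was issued.
        self.locked -= amount
        self.reserves += amount

def mint(ledger: ReserveLedger, amount: Decimal, chain_mint) -> bool:
    """Lock reserves, attempt the on-chain mint, compensate on failure."""
    ledger.lock(amount)
    try:
        chain_mint(amount)             # submit the on-chain transaction
    except Exception:
        ledger.rollback(amount)        # compensating action keeps books honest
        return False
    ledger.commit_mint(amount)
    return True
```

The important property: after any sequence of successful and failed mints, `tokens_outstanding + locked + reserves` equals the original reserves, and tokens never exceed what’s backed.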
Building Market Abuse Detection
If you’re running an exchange, you need surveillance systems. These typically watch for:
Wash trading: User trades with themselves to fake volume
- Detection: Look for trades between accounts with same owner, same IP, or linked KYC
- Also watch for circular trading patterns across multiple accounts
Spoofing: Large orders placed to move price, then canceled
- Detection: Track order placement and cancellation patterns
- Flag accounts with high cancel-to-fill ratios on large orders
Pump and dump: Coordinated buying to inflate price, then selling
- Detection: Unusual trading volume spikes + multiple accounts buying simultaneously + rapid selling after price increase
- Cross-reference with social media activity and customer communications
Front-running: Trading ahead of known large orders
- Detection: Trades immediately before large orders from different accounts
- Compare with access to order book data or customer service access logs
These systems generate alerts that compliance analysts review. You can’t just auto-block because false positives are common, but you need documentation showing you’re actively monitoring.
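As one concrete example, the spoofing signal above reduces to a cancel-to-fill ratio over large orders. The size threshold and ratio cutoff here are assumptions to tune against your own order flow, not fixed rules:

```python
def cancel_to_fill_ratio(orders: list[dict]) -> float:
    """orders: dicts with 'status' in {'filled', 'canceled'} and 'size'."""
    canceled = sum(o["size"] for o in orders if o["status"] == "canceled")
    filled = sum(o["size"] for o in orders if o["status"] == "filled")
    return canceled / filled if filled else float("inf")

def flag_spoofing(orders: list[dict],
                  min_size: int = 10_000, max_ratio: float = 5.0) -> bool:
    """Alert when large orders are overwhelmingly canceled rather than filled."""
    large = [o for o in orders if o["size"] >= min_size]
    return bool(large) and cancel_to_fill_ratio(large) > max_ratio
```

An account placing 110k of large canceled orders against 10k filled gets flagged for analyst review; a normal mix does not.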
MiCA Enforcement
MiCA enforcement is now active. Penalties can be:
- Fines up to €5M or 3-5% of annual turnover (individual companies)
- Personal liability for executives
- Withdrawal of CASP license
- Criminal penalties for serious violations
What triggers enforcement:
- Operating without a license when required
- Misclassifying tokens to avoid stricter requirements
- Inadequate custody controls leading to customer losses
- Reserve deficiencies for stablecoins
- Failure to detect or report market abuse
- Poor AML/CTF controls (MiCA sits on top of existing AML requirements)
The EU is serious about this. If you’re in crypto and serving EU customers, you either get licensed or you exit the EU market. There’s no gray area anymore.
Travel Rule: The Regulation Crypto Can’t Escape
What Travels With Money
The Travel Rule comes from FATF Recommendation 16. It’s not crypto-specific - it originally applied to traditional wire transfers. The rule says: when you move money, information about the sender and recipient must “travel” with the transaction.
For traditional finance, this happens through SWIFT messages. For crypto, there’s no built-in mechanism, which is why this rule is such a pain.
FATF updated the guidance in 2019 to explicitly include crypto assets. Now Virtual Asset Service Providers (VASPs - basically any regulated crypto business) must collect and transmit:
Originator information:
- Name
- Account number or wallet address
- Physical address or national identity number or customer ID
Beneficiary information:
- Name
- Account number or wallet address
The Threshold Problem
The Travel Rule typically applies to transactions over a threshold:
- US (FinCEN): $3,000 (a 2020 FinCEN proposal to lower the threshold to $250 for international transfers was never finalized)
- EU (Transfer of Funds Regulation): no de minimis threshold for transfers between CASPs; €1,000 triggers extra verification for self-hosted wallets
- UK: £1,000
- Singapore: No threshold (all transactions)
- Other jurisdictions: Varies, check local implementation
This creates complexity because you need to know the transaction amount in local currency at the time of the transaction, which requires real-time exchange rate data for crypto.
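The check itself is trivial once you have a point-in-time rate; the operational work is sourcing that rate and persisting it for the audit trail. A sketch:

```python
from decimal import Decimal

def travel_rule_applies(amount_asset: Decimal, fx_rate_local: Decimal,
                        threshold_local: Decimal) -> bool:
    """fx_rate_local: local-currency value of one unit of the asset at the
    moment of the transaction - this must come from a real-time price feed,
    and the rate you used should be stored alongside the transaction."""
    return amount_asset * fx_rate_local >= threshold_local
```

For example, withdrawing 0.02 BTC at £50,000/BTC is exactly £1,000, so in the UK the Travel Rule data collection kicks in; 0.019 BTC at the same rate does not.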
What You Need to Collect and Transmit
When a customer initiates a withdrawal over the threshold, you need to collect beneficiary information:
If sending to another VASP (regulated exchange, wallet service):
- Beneficiary name
- Beneficiary account/wallet address
- Receiving VASP name and address
If sending to a self-hosted wallet (customer’s own wallet, not at a VASP):
- Beneficiary name (often the same customer)
- Wallet address
- Attestation that the customer owns/controls the wallet
Then you need to actually transmit this information to the receiving VASP.
Travel Rule Solution Providers
Most VASPs don’t build Travel Rule infrastructure themselves. Instead, they use solution providers.
Popular providers:
- Notabene
- Sygna
- CipherTrace Travel Rule
- TRP (Travel Rule Protocol)
These providers operate networks where VASPs can exchange Travel Rule data:
- You initiate a withdrawal to another VASP
- Your system calls the Travel Rule provider API with IVMS101 data
- Provider looks up the receiving VASP in their network
- Provider securely transmits data to receiving VASP
- Receiving VASP screens the data and either accepts or rejects
- You receive confirmation and can proceed with on-chain transaction
The key is the network effect - both sender and receiver need to be using compatible Travel Rule solutions. If the receiving VASP isn’t in the network, you’re back to manual processes (email, web forms, etc.).
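The control flow of the steps above can be sketched as follows. This is a hypothetical client: real providers (Notabene, Sygna, etc.) each have their own APIs, so `provider.exchange` and the response shape are stand-ins, not a real interface. IVMS101 is the standard interchange format for the party data.

```python
import uuid

def withdraw_via_travel_rule(provider, originator: dict, beneficiary: dict,
                             amount: str, asset: str, submit_onchain):
    """Exchange Travel Rule data first; broadcast on-chain only on acceptance."""
    transfer_id = str(uuid.uuid4())
    result = provider.exchange(           # hypothetical provider method
        transfer_id=transfer_id,
        ivms101={"originator": originator, "beneficiary": beneficiary},
        amount=amount, asset=asset,
    )
    if result["status"] != "accepted":    # receiving VASP rejected or unknown
        return {"transfer_id": transfer_id, "state": "blocked",
                "reason": result.get("reason", "not accepted")}
    tx_hash = submit_onchain(amount, asset, beneficiary["wallet"])
    return {"transfer_id": transfer_id, "state": "sent", "tx_hash": tx_hash}
```

The ordering is the point: the on-chain transaction only goes out after the data exchange succeeds, so a rejection never leaves you with funds already moved.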
Logging and Audit Trail
For compliance, you need to log:
- Travel Rule data collected for each transaction
- Whether Travel Rule exchange succeeded or failed
- Attestations for self-hosted wallet transactions
- Screening results on beneficiary data
- Any rejections from receiving VASPs and reasons
Retention: typically 5-7 years, same as other AML records.
DORA: Operational Resilience Becomes Law
Who’s In Scope
The Digital Operational Resilience Act (DORA) is EU regulation that came into full effect January 2025. It explicitly covers 20+ types of financial entities:
- Credit institutions (banks)
- Payment institutions and e-money institutions
- Investment firms and management companies
- Crypto-asset service providers and ART issuers (under MiCA)
- Insurance and reinsurance undertakings
- Credit rating agencies
- Crowdfunding service providers
- Statutory auditors and audit firms
Critically, ICT third-party service providers serving these entities are also in scope - especially “critical” ones designated by regulators, who face direct EU oversight. If you’re a cloud provider, payments processor, or SaaS vendor serving EU financial institutions, DORA applies to you.
Smaller entities get a simplified regime (proportionality principle), but you’re still in scope. Penalties: fines up to €10M or 2% of annual turnover, personal liability for executives.
DORA isn’t about specific products or transactions. It’s about making sure your technology doesn’t break and if it does, you can recover quickly. The UK has similar requirements (PRA/FCA operational resilience) but not a single regulation like DORA.
What DORA Requires
DORA has five main pillars.
1. ICT Risk Management
You need a comprehensive framework for managing technology risk:
Risk identification: Document all ICT systems, where they’re hosted, what they do, dependencies between them. Not just production - also development, testing, data analytics environments.
Risk assessment: Evaluate impact if each system fails (which customers affected, which services disrupted, financial impact). Assign risk levels.
Risk mitigation: Controls to prevent failures (redundancy, backups, monitoring, testing). Document why the controls are appropriate for the risk level.
Governance: Board oversight of ICT risk, clear ownership and accountability, regular reporting on ICT risk to senior management.
This sounds basic, but many fintech companies don’t have this documented. They know it informally but haven’t written down “if our payment processing service goes down, 100K customers can’t transact, revenue impact is €500K/day, mitigation is active-active deployment across two cloud regions.”
2. Incident Reporting
When things break, you need to tell regulators. Fast.
Incident classification - Categorize incidents by severity (major, significant, minor). Major incidents get reported within 4 hours.
Initial notification - Within 4 hours of classifying an incident as major (and no later than 24 hours after detection), notify your regulator with basic info (what happened, impact, initial response).
Intermediate report - Within 72 hours, provide detailed analysis (root cause if known, affected services, customer impact, remediation steps).
Final report - Within 1 month, complete incident report with root cause analysis, lessons learned, actions to prevent recurrence.
The 4-hour clock is brutal. Incident detected at 2 AM? Someone needs to be awake and filing reports by 6 AM. This means:
- 24/7 on-call rotation
- Playbooks for incident detection and classification
- Templates for regulatory notifications
- Escalation procedures that actually work at 3 AM
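Since the deadlines are mechanical, compute them rather than remembering them at 3 AM. This sketch runs all three clocks from the classification timestamp; confirm against the final reporting RTS whether a given clock runs from detection or classification.

```python
from datetime import datetime, timedelta

def reporting_deadlines(classified_at: datetime) -> dict[str, datetime]:
    """DORA major-incident reporting deadlines from the classification time."""
    return {
        "initial_notification": classified_at + timedelta(hours=4),
        "intermediate_report": classified_at + timedelta(hours=72),
        "final_report": classified_at + timedelta(days=30),
    }
```

An incident classified at 02:00 yields an initial-notification deadline of 06:00 the same day, matching the “awake and filing by 6 AM” scenario above. Wire these timestamps into your incident tracker so the compliance task is created with the deadline attached.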
3. Testing
You need to test that your systems actually work and can recover:
Testing frequency:
- Vulnerability assessments: At least annually
- Penetration testing: Every 3 years, or after major changes
- Scenario-based testing: Regular (at least annually) simulations of outages, attacks, etc.
Threat-Led Penetration Testing (TLPT): For significant entities (large banks, critical infrastructure), advanced penetration testing simulating real attacker tactics. This isn’t automated vulnerability scanning - it’s hiring red teams to try to break in using nation-state-level techniques.
Business continuity testing: Actually fail over to your disaster recovery environment and verify services work. Not just “the failover script runs,” but “customers can still make payments using the DR environment.”
Most startups do some of this ad-hoc. DORA requires documentation, regular cadence, and evidence that you acted on findings.
4. Third-Party Risk Management
If critical services depend on third parties (cloud providers, payment processors, SaaS tools), you need to manage that risk:
Identify critical third parties: Which vendors could disrupt your business if they fail? Cloud hosting (AWS, GCP, Azure) is obviously critical. Payment rails (SWIFT, card networks) are critical. Is your monitoring system critical? Depends how much you rely on it for incident response.
Contractual requirements: Contracts with critical third parties must include:
- Right to audit their security controls
- Incident notification requirements
- Data access rights (you can get your data out if they fail)
- Termination rights if they don’t meet security standards
- Subcontracting restrictions (they can’t outsource critical functions without your approval)
Concentration risk: If you depend heavily on one cloud provider (everything in AWS), regulators will ask about your exit strategy. You don’t need to be multi-cloud, but you need a documented plan for migrating if AWS terminates your account or has extended outages.
Register of third parties: Maintain a register of all ICT third-party providers, especially critical ones. Update it regularly. Regulators can ask to see it any time.
This hits fintech hard because modern startups depend on dozens of third parties. Stripe for payments, AWS for hosting, Auth0 for authentication, Twilio for SMS, SendGrid for email, Datadog for monitoring - each is a potential point of failure.
5. Information Sharing
Financial entities should share information about cyber threats and vulnerabilities. This is mostly about participating in industry information-sharing groups (ISACs), not building specific technology.
RTO, RPO, and Reality
RTO/RPO Targets You Must Meet
DORA doesn’t specify exact RTO (Recovery Time Objective) and RPO (Recovery Point Objective) numbers, but regulators expect you to define them based on business impact.
For critical services (payment processing, customer account access):
- RTO: Typically 2-4 hours maximum
- RPO: Typically 5-15 minutes maximum
This means if your payment system goes down completely, you need to recover within 2-4 hours. And when you recover, you can’t have lost more than 15 minutes of transaction data.
How you achieve this:
High availability architecture:
- Active-active deployment across multiple availability zones
- Load balancing with automatic failover
- Database replication with synchronous or near-synchronous replication
- Stateless services that can be redeployed quickly
Backup and recovery:
- Continuous or near-continuous backups (not daily snapshots)
- Backups stored in different region than primary
- Regular testing of backup restoration (quarterly at minimum)
- Documented recovery procedures
Data durability:
- Transaction logs that can be replayed
- Idempotency to prevent duplicate processing during recovery
- Reconciliation to detect lost transactions
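The idempotency point deserves a concrete shape: replaying a transaction log after failover must not double-apply payments. A minimal in-memory sketch (a real system would key this table in the same database transaction as the balance update):

```python
from decimal import Decimal

processed: dict[str, Decimal] = {}   # idempotency_key -> applied amount
balances: dict[str, Decimal] = {"acct-1": Decimal("100")}

def apply_payment(idempotency_key: str, account: str,
                  amount: Decimal) -> Decimal:
    """Debit an account exactly once per idempotency key."""
    if idempotency_key in processed:          # replayed during recovery: no-op
        return balances[account]
    balances[account] -= amount
    processed[idempotency_key] = amount
    return balances[account]
```

Replaying the same payment twice, as happens when a log is reprocessed after failover, leaves the balance debited exactly once.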
Most cloud-native fintechs achieve this with:
- Kubernetes for container orchestration
- Managed databases with replication (RDS Multi-AZ, Cloud SQL with replicas)
- Message queues for async processing (SQS, Pub/Sub) that durably store messages
- Object storage for audit logs and backups
Incident Logging and Reporting Automation
Manual incident reporting doesn’t scale when you need reports filed within 4 hours. You need automation:
Incident detection:
- Monitoring alerts that automatically classify severity
- PagerDuty/Opsgenie integration that escalates to compliance team
- Dashboard showing system health and incident status
Automated report generation:
- Template system that pulls data from monitoring, logs, and incident tracking
- One-click export to regulatory format
- Pre-filled with system info, affected services, customer impact estimates
Workflow automation:
- Incident ticket automatically creates regulatory reporting task
- Compliance team gets notification with draft report
- Review and submit within 4-hour window
The goal: engineers focus on fixing the problem, automation handles regulatory notification.
Third-Party Vendor and Service Mapping
You need a system of record for all third-party dependencies:
Vendor registry fields:
- Vendor name and contact info
- Services provided
- Criticality level (critical, important, non-critical)
- Contract expiration date
- Last security assessment date
- Subcontractors they use
- Data they can access
- Geographic location of services
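A minimal record mirroring the fields above, plus the contract-expiry alert, might look like this. A real register lives in a database with change history, not in code; vendor names here are made up.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Vendor:
    name: str
    services: list[str]
    criticality: str                  # "critical" | "important" | "non-critical"
    contract_expires: date
    last_assessment: date
    subcontractors: list[str] = field(default_factory=list)
    data_access: list[str] = field(default_factory=list)
    locations: list[str] = field(default_factory=list)

def expiring_soon(vendors: list[Vendor], today: date,
                  days: int = 90) -> list[str]:
    """Names of vendors whose contracts lapse within `days` - feed to alerting."""
    return [v.name for v in vendors
            if (v.contract_expires - today).days <= days]
```

The same records drive the regulator-facing register export and the dependency graph, so keep them as the single source of truth.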
For microservices architectures, this gets complex. Your services depend on:
- Cloud provider (AWS/GCP/Azure)
- Managed services (databases, caches, queues, storage)
- SaaS tools (auth, payments, communications)
- Open-source libraries (indirectly, supply chain risk)
You need tooling that:
- Auto-discovers dependencies (service mesh, API gateways can help)
- Tracks which services depend on which vendors
- Alerts when contracts are expiring
- Supports audit by showing dependency graph
Penetration Testing and Resilience Testing Cadence
DORA requires regular testing, so build it into your calendar:
Annual cycle:
- Q1: Vulnerability assessment (automated scanning + manual review)
- Q2: Business continuity exercise (failover testing)
- Q3: Penetration test (external + internal)
- Q4: Scenario testing (simulate specific attack or outage scenarios)
After major changes: Any significant architecture change, new product launch, or third-party integration should trigger security assessment.
Documentation: Every test needs a report with:
- What was tested
- What was found
- Severity of findings
- Remediation plan with timeline
- Evidence of remediation completion
Regulators will ask to see historical test reports. Keep them for at least 5 years.
Conclusion
If you made it this far, congratulations - you now know more about fintech regulations than most people who work in fintech. The bad news: none of these regulations exist in isolation. Your AML data retention will conflict with GDPR deletion requests. Your EMI license requires AML controls. Your crypto platform needs MiCA, Travel Rule, and probably DORA. It’s regulations all the way down.
The good news: once you’ve built compliance into your systems properly, it becomes background noise rather than a constant fire drill. The companies that treat compliance as a product feature - not an afterthought - are the ones that scale without regulators knocking on their door.
Now go build something. Just make sure you can delete it when someone asks. Or not, if the law says you can’t.



