← Back to services // Offensive

AI Implementation Pentest

Prompt injection, model manipulation and AI supply chain risks

€175 /hour

AI systems introduce new attack vectors. We test your LLM integrations, AI agents and machine learning pipelines for prompt injection, data exfiltration, model manipulation and unintended information leaks. From chatbots to automated decision systems.

What is an AI pentest?

An AI pentest is a security assessment aimed at the AI and machine learning components in your applications. More organisations are integrating LLMs, AI agents and ML pipelines into their products. This creates attack vectors that traditional pentests do not cover: prompt injection, model manipulation, data exfiltration via the AI, unintended information leaks.

We test your AI implementations against the OWASP Top 10 for LLM Applications and go further. We investigate the full chain: user input to model output, including system prompts, tools/function calling, RAG pipelines and the data the model can access.

Common AI vulnerabilities

  • Prompt injection - direct and indirect prompt injection to make the AI ignore instructions or perform malicious actions.
  • System prompt extraction - reverse engineering the system prompt, including secret instructions and internal logic.
  • Data exfiltration - manipulating the AI to leak sensitive data from the knowledge base or database.
  • Jailbreaking - bypassing safety measures to generate unwanted content.
  • Tool/function abuse - manipulating AI agents that call tools to perform unauthorised actions.
  • Training data poisoning - for custom models: can the training data be manipulated?

Why should you get an AI pentest?

AI security risks are structurally underestimated:

  • AI has access to sensitive data: many implementations access customer data, internal documents or business systems via RAG or function calling.
  • AI attacks are new: most developers have no experience with prompt injection. We do.
  • EU AI Act: regulation imposes requirements on AI system security. A pentest helps demonstrate compliance.
  • Reputational risk: an AI chatbot that leaks sensitive data is instant front-page news.

Our approach

We combine AI knowledge with offensive security experience. We are researchers who enjoy playing with new technology - and breaking it:

  • Scoping - which model, what data, which tools, what threat model? Direct conversation with the researcher.
  • System prompt analysis - we attempt to extract the system prompt. This often reveals internal logic and security measures.
  • Prompt injection testing - direct and indirect injection, multi-turn attacks, encoding bypasses. Systematic and creative.
  • Data access testing - accessing data via the AI that the user should not see. Other users, internal documents, system configuration.
  • Tool/function abuse - if the AI can call tools, we test whether we can abuse them. APIs, databases, file systems.
  • Output analysis - PII leaks, hallucinations containing confidential data, unwanted content.
  • Reporting - findings with example prompts, outputs and concrete hardening recommendations. Retest available on request.

What does an AI pentest cost?

Our hourly rate for AI pentesting is €175 per hour. Indications:

  • Chatbot implementation: €5,000 - €12,000
  • AI agent with tools, RAG and database access: €12,000 - €25,000
Costs depend on complexity, number of AI components and data sensitivity. Fixed quote after scoping call.

Methodology

1

Scoping

Inventarisatie van AI-componenten, modellen, data flows en integratiepunten.

2

Prompt Analysis

Testen op prompt injection, jailbreaks en system prompt extractie.

3

Data Flow Testing

Onderzoek naar data exfiltratie, PII-lekken en onbedoelde informatiedeling.

4

Rapportage

Rapport met bevindingen, risicoclassificatie en concrete mitigaties.

Frequently asked questions

Which AI models can you test?

GPT-4, Claude, Gemini, Llama, Mistral - the model does not matter. The vulnerabilities are in the implementation: system prompts, tooling, RAG configuration. Not in the model itself.

Is an AI pentest different from a regular web application pentest?

Yes. AI-specific attacks - prompt injection, model manipulation, data exfiltration via the AI - are not covered by a standard web application pentest. Does your application contain AI components? Then we recommend both tests.

Can you also test internal AI tools?

Yes. Internal AI tools for HR, finance or operations often have more access to sensitive data than a public chatbot. Those are particularly interesting to test.

How new is the field of AI pentesting?

Young but growing. We actively follow developments, conduct our own research and publish about it. The OWASP Top 10 for LLM Applications forms the basis of our methodology, but we go beyond the checklist.

Ready to test your security?

Get in touch with our team for a no-obligation conversation about your security challenges.