API and endpoint security
We review authentication, rate limiting, input validation, output filtering and network access controls for inference APIs — whether you're running your own models or calling third-party LLM providers.
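As a rough illustration of two of these controls, here is a minimal sketch of per-client rate limiting and input-size validation in front of an inference endpoint. This is an assumption-laden example, not a prescribed implementation: the token-bucket parameters, the `MAX_PROMPT_CHARS` limit, and the function names are all illustrative.

```python
import time

# Illustrative token bucket: the kind of per-client rate limit we
# look for in front of inference APIs. Rates and capacities here
# are placeholder values, not recommendations.
class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity     # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Replenish tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


MAX_PROMPT_CHARS = 8_000  # illustrative limit, tuned per deployment

def validate_prompt(prompt: str) -> bool:
    # Reject empty or oversized inputs before they reach the model.
    return 0 < len(prompt) <= MAX_PROMPT_CHARS
```

In practice these checks usually live in an API gateway or middleware layer rather than application code, but the assessment questions are the same: is there a per-client limit, and is input bounded before inference?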
LLM APIs, ML pipelines, model storage, inference endpoints and training infrastructure all introduce attack surface. We assess and harden the systems and controls around your AI workloads — applying standard cloud and infrastructure security discipline to ML-specific environments.
We assess security across your data ingestion, training and serving pipelines — including access controls, supply chain integrity, dependency management and privilege separation between pipeline stages.
We review which AI interactions are logged, where logs go, how long they're retained, and whether you have the visibility needed to detect prompt injection, data exfiltration or model misuse.
Organisations running AI workloads on cloud infrastructure, whether fine-tuned models, self-hosted open-source models, or integrated LLM APIs. Particularly relevant for technology companies and AI-enabled product teams operating MLOps environments.
2–4 weeks depending on infrastructure complexity. Environments with multiple cloud providers or custom MLOps pipelines may require additional scope.
We'll assess what you're running and what controls make sense for your environment and threat model.