When evaluating cloud security services, security leaders should assess cloud environment coverage breadth, detection approach (behavioral vs. signature vs. AI-powered), response capabilities, analyst expertise in cloud-native threats, integration with existing tools, and SLA commitments for detection and response times.
Five things to know before evaluating cloud security services:
- CSPM and CDR solve different problems—a vendor offering only posture management is not a cloud security service
- Coverage breadth matters: ask specifically which cloud services are monitored within each platform, not just which platforms
- Human expertise and 24×7 availability are what separate MDR from automated tools—ask how alerts are investigated, not just detected
- MITRE ATT&CK coverage is the most concrete way to assess detection quality—ask for a coverage map
- SLA benchmarks should be specific: MTTD and MTTR targets per severity tier, not vague “rapid response” claims
Criterion 1: Cloud environment coverage breadth
Cloud security is a broad discipline, and coverage breadth is the first filter for evaluating any cloud security service. Ask specifically: which cloud platforms are covered (AWS, Google Cloud, Azure, Kubernetes, SaaS), and within each platform, which services are monitored?
The second question matters more than the first. A vendor claiming AWS coverage might monitor EC2 and S3 but miss Lambda, EKS, or CloudTrail events from AWS Organizations management accounts. A vendor claiming Kubernetes coverage might analyze pod-level events but miss control plane audit logs. Coverage claims require specificity.
Key questions to ask:
- Which specific AWS, Google Cloud, and Azure services generate log sources you ingest?
- Do you cover Kubernetes audit logs in addition to workload-level telemetry?
- Which SaaS applications are in scope, and which audit log types do you ingest from each?
- How do you handle multi-cloud environments—unified detection or separate per-provider coverage?
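The coverage questions above can be turned into a simple checklist comparison. Here's a minimal sketch (all platform and log-source names are illustrative, not a canonical list) that compares a vendor's claimed log sources against the sources your environment actually emits:

```python
# Minimal sketch: compare a vendor's claimed log sources against the sources
# your environment actually produces. Source names here are illustrative.

REQUIRED_SOURCES = {
    "aws": {"cloudtrail", "vpc_flow_logs", "eks_audit_logs", "lambda_logs"},
    "kubernetes": {"audit_logs", "workload_telemetry"},
}

def coverage_gaps(claimed: dict) -> dict:
    """Return, per platform, the required sources the vendor does not ingest."""
    return {
        platform: required - claimed.get(platform, set())
        for platform, required in REQUIRED_SOURCES.items()
        if required - claimed.get(platform, set())
    }

# Example: a vendor that "covers AWS" but only ingests CloudTrail and flow
# logs, and analyzes Kubernetes workloads without control plane audit logs.
vendor_claim = {
    "aws": {"cloudtrail", "vpc_flow_logs"},
    "kubernetes": {"workload_telemetry"},
}
print(coverage_gaps(vendor_claim))
```

A vendor answering the checklist completely produces an empty gap map; anything left over is a specific follow-up question for the sales engineering call.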
Criterion 2: Detection approach
Not all cloud security detection is equivalent. The key distinction is between configuration scanning (CSPM) and runtime behavioral detection (CDR), and the two should not be confused.
CSPM tools identify misconfigurations: exposed storage buckets, overly permissive IAM policies, unencrypted databases. They’re valuable for posture management but don’t detect active threats. A vendor whose “detection” is primarily CSPM findings is not a cloud detection and response service; it’s a configuration audit tool.
Behavioral detection identifies attacker activity in real time by analyzing cloud telemetry against baselines and known attack patterns. AI-powered detection adds the ability to identify anomalous patterns that rule-based logic wouldn’t catch.
Ask vendors specifically:
- What percentage of your detections are behavioral vs. signature/rule-based?
- How do you detect attacks that use valid credentials (the most common cloud attack technique)?
- What is your false positive rate in production environments outside of demos?
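To make the CSPM-vs-behavioral distinction concrete: a misconfiguration scan can't catch an attacker using a stolen but valid credential, because nothing is misconfigured. A toy baseline model illustrates the behavioral approach. Real services model far more dimensions (time of day, API call sequences, peer groups), and the field names here are illustrative:

```python
# Toy illustration of behavioral (baseline) detection for valid-credential
# abuse: flag activity when a principal appears from a source never seen in
# its learned baseline. Real detections are far richer; names are illustrative.
from collections import defaultdict

class SourceBaseline:
    def __init__(self):
        self.seen = defaultdict(set)  # principal -> set of (country, asn)

    def learn(self, principal, country, asn):
        """Record a (country, ASN) pair as normal for this principal."""
        self.seen[principal].add((country, asn))

    def is_anomalous(self, principal, country, asn):
        """Anomalous only if we have a baseline and this origin isn't in it."""
        baseline = self.seen[principal]
        return bool(baseline) and (country, asn) not in baseline

baseline = SourceBaseline()
baseline.learn("ci-deploy-role", "US", "AS16509")  # normal CI activity

print(baseline.is_anomalous("ci-deploy-role", "US", "AS16509"))  # False
print(baseline.is_anomalous("ci-deploy-role", "RU", "AS12389"))  # True
```

The second call is exactly the case CSPM misses: the credential is valid and no configuration changed, but the behavior deviates from the learned baseline.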
Criterion 3: Response speed and SLAs
Detection without response is incomplete. Evaluate what happens after a threat is detected: how quickly it’s investigated and what actions the provider can take.
Industry SLA benchmarks for leading cloud security services:
- MTTD (mean time to detect): Under 15 minutes for high-severity cloud alerts
- MTTR (mean time to respond): Under 60 minutes for confirmed incidents
Ask vendors for their actual production metrics, not just theoretical SLAs. Providers who publish their real-world performance data (not just contractual commitments) are demonstrating confidence in their operations. Vague commitments like “rapid response” or “same-day investigation” without specific time targets should prompt follow-up questions.
Also clarify: does “response” mean the provider takes containment actions, or does it mean they send you a notification? The difference between a provider who can suspend a compromised account automatically and one who notifies you to suspend it yourself is operationally significant, especially outside business hours.
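When reviewing a provider's production metrics, it helps to compute MTTD and MTTR the same way for every vendor you evaluate. A minimal sketch against the benchmarks above (incident records and tiers are illustrative):

```python
# Sketch: compute MTTD/MTTR from incident records and check them against
# per-severity SLA targets (the 15/60-minute benchmarks above).
from datetime import datetime
from statistics import mean

SLA_MINUTES = {"high": {"mttd": 15, "mttr": 60}}

def minutes(start, end):
    return (end - start).total_seconds() / 60

def sla_report(incidents):
    """incidents: dicts with severity, occurred, detected, responded times."""
    report = {}
    for sev, targets in SLA_MINUTES.items():
        rows = [i for i in incidents if i["severity"] == sev]
        if not rows:
            continue
        mttd = mean(minutes(i["occurred"], i["detected"]) for i in rows)
        mttr = mean(minutes(i["detected"], i["responded"]) for i in rows)
        report[sev] = {
            "mttd": mttd, "mttd_met": mttd <= targets["mttd"],
            "mttr": mttr, "mttr_met": mttr <= targets["mttr"],
        }
    return report

t = datetime.fromisoformat
incidents = [{
    "severity": "high",
    "occurred": t("2024-06-01T02:00"),   # attacker activity begins
    "detected": t("2024-06-01T02:10"),   # alert fires: MTTD = 10 min
    "responded": t("2024-06-01T02:45"),  # containment: MTTR = 35 min
}]
print(sla_report(incidents))
```

Note that this only works if the provider defines "responded" as a containment action; if their SLA clock stops at notification, the same arithmetic measures something much weaker.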
Criterion 4: Human expertise and 24×7 availability
Cloud attacks don’t respect business hours. Credential compromise followed by resource abuse can escalate significantly within hours, making 24×7 coverage a meaningful requirement, not a premium feature.
Beyond availability, evaluate the depth of cloud-native expertise.
Ask:
- Are your analysts trained specifically on cloud attack patterns (AWS, Google Cloud, Azure), or do they primarily handle endpoint investigations?
- How many analysts are available during nights and weekends, and what is the investigative workflow outside business hours?
- Can you describe a recent cloud incident investigation, including what the detection was, how it was investigated, and what response action was taken?
Concrete examples of past cloud investigations are one of the most reliable ways to assess actual cloud expertise. Vendors who can describe specific CloudTrail investigation workflows, RBAC escalation investigations, or container escape response scenarios have real operational experience. Vendors who give generic answers don’t.
Criterion 5: Tool integration
Cloud security services should integrate with your existing security stack, not require you to rip and replace it. Key integration questions:
- Does the service work with your existing SIEM, or does it require its own data platform?
- Which cloud-native tools do you integrate with (GuardDuty, Security Command Center, Defender for Cloud, Wiz, Lacework)?
- How are findings surfaced? In the provider’s own platform, in your existing SIEM, or both?
- What does the API look like for programmatic integration with your incident response workflows?
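When evaluating the API question, a useful test is whether provider findings can be normalized into your SIEM's event format without lossy translation. The sketch below assumes a hypothetical finding schema; every field name here is a placeholder you'd check against the vendor's actual API documentation:

```python
# Hedged sketch: normalize a provider finding (hypothetical JSON schema --
# verify field names against your vendor's real API docs) into a flat
# event suitable for SIEM ingestion.
import json

def normalize_finding(raw: str) -> dict:
    f = json.loads(raw)
    return {
        "timestamp": f["detected_at"],
        "severity": f["severity"].lower(),
        "technique": f.get("mitre_technique"),   # e.g. "T1078"
        "resource": f["resource"]["arn"],
        "summary": f["title"],
        "source": "cloud-mdr-provider",          # placeholder source name
    }

raw = json.dumps({
    "detected_at": "2024-06-01T02:10:00Z",
    "severity": "HIGH",
    "mitre_technique": "T1078",
    "title": "Console login from new ASN",
    "resource": {"arn": "arn:aws:iam::123456789012:user/ci-deploy"},
})
print(normalize_finding(raw))
```

If a provider's findings carry ATT&CK technique IDs and resource identifiers natively, this mapping is trivial; if they don't, your team inherits the enrichment work.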
Criterion 6: MITRE ATT&CK coverage
MITRE ATT&CK coverage is the most concrete, verifiable way to assess cloud detection quality. Ask any potential provider to map their cloud detection content to the MITRE ATT&CK cloud matrix, specifically to show which techniques they have detection coverage for across AWS, Google Cloud, and Azure.
Providers with genuine detection depth will be able to produce a coverage map and discuss which specific detections cover which techniques. Providers who can’t answer this question specifically, or who claim to “cover all ATT&CK techniques” without being able to demonstrate the specifics, are likely overstating their detection capability.
Pay particular attention to coverage for high-priority techniques: T1078 (Valid Accounts), T1537 (Transfer Data to Cloud Account), T1098 (Account Manipulation), and T1580 (Cloud Infrastructure Discovery).
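A vendor-supplied coverage map can be scored mechanically against those priority techniques. A minimal sketch (the vendor map and detection names are illustrative):

```python
# Sketch: score a vendor's ATT&CK coverage map against the high-priority
# cloud techniques listed above. The vendor map is illustrative.
PRIORITY = {
    "T1078": "Valid Accounts",
    "T1537": "Transfer Data to Cloud Account",
    "T1098": "Account Manipulation",
    "T1580": "Cloud Infrastructure Discovery",
}

def priority_gaps(vendor_map: dict) -> list:
    """Return priority technique IDs with no actual detections behind them."""
    return sorted(t for t in PRIORITY if not vendor_map.get(t))

vendor_map = {
    "T1078": ["new-asn-login", "impossible-travel"],
    "T1098": ["iam-policy-attach-admin"],
    "T1580": [],  # technique claimed on the map, but no detections listed
}
print(priority_gaps(vendor_map))  # ['T1537', 'T1580']
```

Note that the empty list under T1580 still counts as a gap: a technique on the map with no named detections behind it is exactly the "we cover all ATT&CK techniques" pattern to probe.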
Criterion 7: Compliance support
Cloud security services are often purchased in part to support compliance requirements such as SOC 2, PCI DSS, HIPAA, and FedRAMP.
Evaluate:
- Which compliance frameworks does the service support?
- What documentation and reporting does the service provide for audit purposes?
- Does the service provide evidence of continuous monitoring, which many frameworks require?
- Are there any data residency or data handling requirements that the service’s architecture must accommodate?
Compliance support shouldn’t be the primary evaluation criterion. A service that meets compliance checkboxes but provides weak detection isn’t actually reducing risk. But it’s a meaningful secondary consideration for organizations with regulatory obligations.
Criterion 8: Cost model
Cloud security service pricing varies significantly. Common models include:
- Per-asset pricing: Based on number of cloud accounts, endpoints, or workloads monitored
- Data volume pricing: Based on log ingestion volume
- Outcome-based or flat-rate: Fixed pricing based on scope of coverage
Ask vendors for total cost of ownership including any fees for additional data sources, increased log volumes, or additional cloud platforms added over time. Cloud environments grow, so cost models that scale steeply with usage can produce budget surprises.
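The scaling concern is easy to make concrete: log volume often grows faster than asset count, so two pricing models that look similar today can diverge sharply as the environment grows. A sketch with placeholder prices (not real vendor quotes):

```python
# Sketch: project annual cost under two common pricing models as the
# environment grows. All prices are placeholders, not real vendor quotes.

def per_asset_cost(workloads: int, price_per_workload_month: float) -> float:
    """Annual cost under per-asset pricing."""
    return workloads * price_per_workload_month * 12

def data_volume_cost(gb_per_day: float, price_per_gb: float) -> float:
    """Annual cost under ingestion-volume pricing."""
    return gb_per_day * 365 * price_per_gb

# Year 1 vs year 2: workloads double, but log volume nearly triples
# (a common pattern as teams enable more audit log sources).
for label, workloads, gb_day in [("year 1", 200, 50), ("year 2", 400, 140)]:
    print(label,
          f"per-asset: ${per_asset_cost(workloads, 15):,.0f}",
          f"data-volume: ${data_volume_cost(gb_day, 0.80):,.0f}")
```

Running both projections during evaluation turns "cost models that scale steeply" from a vague worry into a number you can negotiate against.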
Also evaluate the build vs. buy vs. manage trade-off. The true cost of self-managed cloud security includes engineering time for detection content development, analyst time for 24×7 coverage, and tool costs for each cloud platform’s monitoring. Managed services often provide better per-dollar security outcomes than equivalent internal investment, particularly for organizations without dedicated cloud security engineering teams.
Frequently asked questions
What should I look for in a cloud security service provider?
Start with coverage and detection quality, not brand or price. Coverage breadth means asking specifically which cloud services within each platform are monitored, not just whether the provider supports AWS or Google Cloud. Detection quality means understanding whether behavioral detection is in place for credential-based attacks (the most common cloud threat), not just whether CSPM findings are surfaced. From there, human expertise and 24×7 availability determine whether threats are contained quickly or sit unresponded to overnight. Ask vendors for real-world metrics like actual MTTD and MTTR in production (not just contractual SLAs) and ask for examples of recent cloud incidents they’ve investigated. The specificity of their answers tells you more than any feature matrix.
What is the difference between CDR and CSPM vendors?
CSPM vendors focus on identifying configuration risks and compliance gaps. They scan your cloud environment and tell you what’s misconfigured. CDR vendors detect active threats and respond to incidents in real time. They watch your cloud environment continuously and alert when an attacker is actively operating. Both address real security needs, but they’re often confused because they both claim to address “cloud security.” The distinction matters when evaluating: a vendor whose cloud security service is primarily CSPM is not providing runtime threat detection. You need both layers—CSPM to manage your attack surface and CDR to detect when it’s being exploited.
How do I evaluate cloud security service coverage?
Ask vendors to map their detection capabilities to MITRE ATT&CK for cloud. This is the most concrete, verifiable way to assess coverage depth. Then ask specifically which cloud services within each platform generate log sources they ingest. “We support AWS” is a much weaker coverage claim than “we ingest CloudTrail, VPC Flow Logs, GuardDuty findings, EKS audit logs, and Lambda function logs.” Ask for a technical conversation with their cloud detection engineering team, not just a sales presentation. The depth of their answers to specific technical questions about CloudTrail investigation workflows or Google Cloud audit log analysis will tell you more than any feature checklist.
What cloud security services does Expel offer?
Expel provides cloud detection and response as part of its MDR platform, covering AWS, Google Cloud, Microsoft Azure, Kubernetes, Oracle, and SaaS with 24×7 human-led monitoring, AI-powered detection, and automated response. Expel publishes its actual operational metrics publicly.
What SLAs should I expect from a cloud security service?
Industry benchmarks include mean time to detect (MTTD) under 15 minutes for high-severity cloud alerts and mean time to respond (MTTR) under 60 minutes. Ask vendors for their actual production metrics—not just contractual commitments—and clarify what “respond” means in their SLA: does it mean an analyst has begun investigating, that you’ve been notified, or that a containment action has been executed? The difference is operationally significant, particularly for incidents that escalate quickly.
