This article explores security operations center (SOC) performance efficiency and where to find measurable data for your SOC. It features insights from a video interview with Ben Brigida and Ray Pugh, SOC operations leaders at Expel.
The complete interview can be found here: How to measure a SOC
Measuring SOC performance efficiency shouldn’t be an overwhelming exercise in data collection. The most effective measurement programs start small, focus on outcomes that genuinely matter, and build momentum through incremental wins rather than attempting perfect visibility from day one.
When security leaders ask where to find measurable data for their SOC, the answer depends less on data availability and more on clarity about what success looks like. Organizations that define meaningful outcomes first—and then identify measurements that inform progress toward those outcomes—build more valuable metrics programs than those that measure everything available simply because they can.
The crawl, walk, run approach to SOC performance efficiency
The journey toward comprehensive SOC measurement begins with accepting that your current state is exactly where you should start. Rather than delaying metrics implementation until ideal conditions exist, successful programs start with whatever they can measure reliably today.
This “crawl, walk, run” methodology acknowledges that momentum matters more than perfection. Even limited initial metrics provide learning opportunities that inform future measurement expansion. Organizations that trend a small subset of data over time discover patterns and insights that wouldn’t be apparent from attempting immediate comprehensive measurement.
Starting small also prevents the analysis paralysis that frequently stalls metrics initiatives. When faced with dozens of potential measurements, teams often struggle to prioritize or become overwhelmed by implementation complexity. Focusing on a few foundational metrics creates achievable goals that build confidence and organizational support for expanded measurement.
The initial metrics subset should always connect directly to outcomes you believe indicate team success, efficiency, and quality. If you can’t articulate why a specific metric matters or what decisions it would inform, that measurement probably doesn’t belong in your initial implementation.
Industry standard SOC metrics that provide baseline visibility
While every organization’s specific measurement needs differ based on their security environment and operational priorities, certain metrics have become industry standards because they provide fundamental visibility into SOC performance efficiency.
Mean time to detect (MTTD) measures how quickly your SOC identifies threats after they occur in your environment. This metric directly relates to threat containment effectiveness—faster detection typically means less time for attackers to accomplish their objectives. Organizations can find this data by comparing initial compromise timestamps (determined through investigation) with detection timestamps from security tools.
Mean time to respond (MTTR) tracks how long it takes teams to begin taking action once a threat is detected. This metric reveals bottlenecks in the response process and helps identify where workflow improvements or automation might accelerate containment. MTTR data comes from ticketing systems that timestamp when incidents are created and when response actions begin.
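As a minimal sketch of how these two measurements come together, the example below computes MTTD and MTTR from a handful of incident records. The field names (compromise_time, detection_time, response_time) are hypothetical stand-ins for whatever your investigation notes, security tools, and ticketing system actually expose.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: compromise_time comes from investigation findings,
# detection_time from security tooling, response_time from the ticketing system.
incidents = [
    {"compromise_time": "2024-05-01T02:14:00", "detection_time": "2024-05-01T03:05:00", "response_time": "2024-05-01T03:22:00"},
    {"compromise_time": "2024-05-03T11:40:00", "detection_time": "2024-05-03T11:58:00", "response_time": "2024-05-03T12:31:00"},
]

def minutes_between(start: str, end: str) -> float:
    """Elapsed minutes between two ISO-8601 timestamps."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 60

# MTTD: compromise -> detection; MTTR: detection -> first response action.
mttd = mean(minutes_between(i["compromise_time"], i["detection_time"]) for i in incidents)
mttr = mean(minutes_between(i["detection_time"], i["response_time"]) for i in incidents)

print(f"MTTD: {mttd:.1f} minutes")
print(f"MTTR: {mttr:.1f} minutes")
```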
Time to decision and time to triage provide more granular visibility into the alert lifecycle. How long does it take for an analyst to begin examining an alert? How long to determine whether it represents genuine malicious activity? These measurements help identify whether analysts have the context and tools they need to make confident decisions quickly. This data typically exists in SIEM platforms or SOC workflow management systems that track analyst interactions with alerts.
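Averages can hide slow outliers in these lifecycle measurements, so many teams also look at percentiles. A brief sketch, using made-up triage durations:

```python
from statistics import median, quantiles

# Hypothetical triage durations in minutes, pulled from SIEM or workflow timestamps.
triage_minutes = [2, 3, 3, 4, 5, 5, 6, 8, 9, 12, 15, 45]

p50 = median(triage_minutes)
# quantiles(..., n=100) returns the 1st..99th percentiles; index 94 is the 95th.
p95 = quantiles(triage_minutes, n=100)[94]

print(f"Median time to triage: {p50} minutes")
print(f"95th percentile time to triage: {p95:.0f} minutes")
```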
Work time measurement enables capacity calculation—a frequently overlooked but crucial metric for sustainable SOC operations. Tracking how long analysts actually spend working on alerts, investigations, and incidents reveals whether teams operate within healthy capacity limits or risk decision fatigue from overloading.
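A minimal capacity sketch, using assumed per-alert work times and staffing figures, totals the recorded work time for a shift and divides it by the analyst minutes actually available:

```python
# Assumed per-alert handling times in minutes for one shift.
alert_work_minutes = [0.5, 30, 4, 12, 2, 25, 7, 3, 18, 6]
analysts_on_shift = 2
shift_hours = 8

available_minutes = analysts_on_shift * shift_hours * 60
utilization = sum(alert_work_minutes) / available_minutes

print(f"Analyst utilization this shift: {utilization:.0%}")
if utilization > 0.7:  # illustrative guardrail; see the capacity discussion below
    print("Warning: loading above ~70% of capacity; wait times and quality are at risk.")
```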
Understanding the decision fatigue principle in SOC metrics
Security analysis is fundamentally a decision-intensive job. Analysts make thousands of decisions daily as they triage alerts, investigate suspicious activity, and respond to incidents. This cognitive load creates real limitations that metrics must account for.
Decision fatigue occurs when the volume of decisions required exceeds an analyst’s cognitive capacity. Understanding this limitation is essential for maintaining SOC performance efficiency. Quality deteriorates not because analysts lack skill or effort, but because the human brain has finite decision-making resources. You cannot simply ask teams to work harder when alert volumes spike—that approach predictably leads to missed detections and burned-out analysts.
The Kingman equation, sometimes called the VUT equation, quantifies this relationship mathematically. It shows that when analyst loading exceeds approximately 70% of available capacity, queue wait times and total work time grow sharply while decision quality declines. This means organizations need visibility into actual workload to identify when teams approach dangerous capacity thresholds.
Finding this data requires tracking not just how many alerts analysts process, but how much actual work time those alerts consume. Some alerts require thirty seconds of triage; others demand thirty minutes of investigation. Understanding this distinction enables leaders to calculate realistic capacity and identify when additional staffing or automation becomes necessary to maintain quality.
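For reference, a common form of Kingman’s approximation for expected queue wait time (under general single-server queueing assumptions) is:

$$
\mathbb{E}(W) \;\approx\; \left(\frac{\rho}{1-\rho}\right)\left(\frac{C_a^2 + C_s^2}{2}\right)\tau
$$

where ρ is utilization (analyst loading), C_a and C_s are the coefficients of variation of alert arrival and handling times, and τ is the mean handling time. The ρ/(1−ρ) term drives the threshold effect: at 70% loading it is roughly 2.3, but at 90% it climbs to 9, so queued alerts wait far longer with the same staffing.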
Measuring what matters: Outcome-focused metrics selection
The most valuable SOC performance efficiency metrics programs resist the temptation to measure everything measurable and instead maintain disciplined focus on measurements that inform important decisions.
This requires asking tough questions about every proposed metric: What outcome does this measurement relate to? What decisions would this data inform? If the metric changed significantly, what action would we take? Measurements that don’t have clear answers to these questions often create noise without providing insight.
Organizations should be opinionated about what constitutes success, efficiency, and quality in their SOC operations. These opinions drive metric selection—if you believe rapid threat containment matters most, emphasize detection and response timing. If analyst retention concerns you, track workload and capacity metrics closely. If detection accuracy represents your priority, focus on false positive rates and missed detection analysis.
However, being opinionated doesn’t mean being rigid. The most successful metrics programs combine strong initial hypotheses with eagerness to adjust when data reveals unexpected patterns. If metrics suggest one conclusion but operational reality demonstrates another, that discrepancy demands investigation and likely metric refinement.
The iterative nature of effective SOC metrics programs
SOC metrics programs never reach a “complete” state—they evolve continuously as threats change, team capabilities mature, and organizational requirements shift. Accepting this reality from the start prevents the perfectionism that delays valuable measurement.
Getting into the data and learning from it represents the only path to metrics program maturity. Organizations cannot design perfect measurement systems in conference rooms; they must implement initial metrics, observe what those measurements reveal, identify gaps or limitations, and continuously refine their approach based on accumulated experience.
This iterative approach applies to both what you measure and how you interpret measurements. Early metrics implementations frequently reveal that certain measurements don’t provide expected insights or that data quality issues limit reliability. These discoveries inform the next iteration rather than indicating failure.
The learning that happens through hands-on data analysis cannot be replicated through theoretical planning. When teams begin examining actual metrics, they discover patterns, relationships, and anomalies that suggest new measurement opportunities or reveal limitations in current approaches. This practical engagement with data drives continuous improvement.
Finding data sources within your existing security infrastructure
Most organizations already have access to measurement data through existing security and operational tools—they simply need to identify and extract it systematically. These data sources form the foundation for tracking SOC performance efficiency over time.
SIEM platforms contain timestamps for alert creation, analyst interactions, and status changes that enable calculation of detection timing, triage duration, and investigation length. Ticketing systems track when incidents are created, when analysts begin response actions, and when incidents are resolved—providing MTTR data and workflow visibility.
Endpoint detection and response (EDR) tools log when threats are first detected on systems and when containment actions are executed. Network security tools provide similar visibility into network-based threat detection and response. Identity and access management systems reveal when suspicious authentication activity is detected and when access is revoked.
The challenge isn’t finding data sources but rather aggregating and analyzing data from disparate systems in meaningful ways. This often requires some technical implementation work—extracting data from various tools, normalizing timestamps and formats, and creating dashboards or reports that make metrics accessible to decision-makers.
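As a small illustration of that normalization work, the sketch below converts timestamps from differently formatted exports into a single UTC representation. The source names, field names, and formats are assumed for the example rather than taken from any particular product.

```python
from datetime import datetime, timezone

# Assumed raw events from different tools, each with its own timestamp format.
raw_events = [
    {"source": "siem",   "alert_created": "2024-05-01 03:05:12"},   # naive timestamp, assumed UTC
    {"source": "ticket", "created_at": "05/01/2024 03:22:45 AM"},   # US-style export
    {"source": "edr",    "detected": "2024-05-01T03:04:58+00:00"},  # ISO-8601 with offset
]

def to_utc(value: str, fmt: str | None = None) -> datetime:
    """Parse a timestamp string and return a timezone-aware UTC datetime."""
    dt = datetime.strptime(value, fmt) if fmt else datetime.fromisoformat(value)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)  # assume UTC when no offset is given
    return dt.astimezone(timezone.utc)

normalized = [
    to_utc(raw_events[0]["alert_created"], "%Y-%m-%d %H:%M:%S"),
    to_utc(raw_events[1]["created_at"], "%m/%d/%Y %I:%M:%S %p"),
    to_utc(raw_events[2]["detected"]),
]

for event, ts in zip(raw_events, normalized):
    print(f"{event['source']:6s} -> {ts.isoformat()}")
```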
Organizations with limited resources should resist the temptation to delay metrics until perfect data aggregation exists. Even manually collected metrics from a few key sources provide more value than no metrics at all. Automation and integration can evolve over time as the metrics program matures.
Building SOC performance efficiency through incremental measurement wins
The most sustainable path to comprehensive SOC measurement involves celebrating small wins that build organizational confidence and support for expanded metrics programs.
Starting with one or two foundational metrics that provide clear value demonstrates the benefits of measurement without overwhelming teams with implementation complexity. As these initial metrics prove useful for decision-making, stakeholders naturally become more receptive to expanding measurement scope.
Each measurement iteration should build on previous learning. Initial metrics might reveal bottlenecks that suggest additional measurements would provide useful context. Early data might expose quality issues in certain data sources, informing infrastructure improvements that enable more reliable measurement later.
This incremental approach also allows organizations to develop analytical capabilities alongside measurement expansion. Teams learn to interpret metrics, identify patterns, and translate data into actionable insights through practical experience with initial measurements. These skills become increasingly valuable as metrics programs grow in sophistication.
Resources for SOC metrics implementation
Organizations building SOC performance efficiency programs can benefit from additional resources and industry guidance:
- Performance metrics, part 1: Measuring SOC efficiency explores foundational metrics like alert latency, capacity planning, and the 95th percentile measurement approach
- Performance metrics, part 2: Keeping things under control addresses how to use data to spot potential SOC analyst burnout and maintain operational control
- Performance metrics, part 3: Success stories shares real-world applications of measurement frameworks and lessons learned
- How to measure SOC quality discusses quality control approaches that support learning cultures and inspection methodologies
- SOC metrics dashboard tool provides a free downloadable resource to track key KPIs and analyst performance
- How to measure SOC efficiency features video insights from SOC operations leaders on balancing speed and quality in measurement
- What frameworks and tools drive security maturity? examines how organizations use frameworks like NIST CSF to define and measure SOC functions
The success of SOC metrics programs ultimately depends on starting with clear outcomes, measuring what you can today, and continuously refining your approach based on what the data reveals. Perfect visibility isn’t the goal—useful insights that inform better security decisions are what matter most.
