
Why and how to use site reliability golden signals

Engineers use SRE metrics to benchmark and improve the reliability and performance of systems and services. Learn more about the four golden signals: latency, traffic, errors, and saturation.

By Cortex, April 1, 2024

Software complexity makes it harder for teams to rapidly identify and resolve issues. IT service management has evolved from an afterthought into a central part of DevOps, and microservices architectures are especially prone to delayed or missed detection of such problems.

Monitoring mechanisms need to keep up with these complex infrastructures. Maintaining reliability and performance while managing this complexity requires a considered, data-driven approach. Enter site reliability engineering (SRE), which bridges the gap between development and operations. In this article, we dive into an essential component of the modern software development workflow: SRE metrics. We explain how you can make them work for your teams.

What is site reliability engineering (SRE)?

Site Reliability Engineering (SRE) is the application of software engineering principles to the design, development, and daily operation of large IT systems. The fundamentals of SRE are about finding the root cause of problems, and applying automation and observability in operations to solve them at scale.

Another important aspect is maintaining the daily health of systems. Typical tasks for site reliability engineers might include testing the production environment, defining service level indicators, driving continuous improvement, and handling important but non-urgent work such as disaster recovery planning.

The concept of SRE emerged at Google back in 2003 under Ben Treynor, who was tasked with leading an engineering team and keeping sites reliable. He directed colleagues to spend half their time on development and the other half on operations, ensuring that engineers understood how their software was deployed. In the process he invented site reliability engineering, defining it as “what happens when you ask a software engineer to design an operations team.”

For a more detailed definition of SRE, see this blog.

The core components of site reliability engineering

  • Monitoring: Monitoring forms an important basis for SRE, providing the visibility needed to understand the state of systems and services in real time. Effective monitoring strategies enable early detection of issues, trend analysis for capacity planning, and insights into system health.
  • Availability: Availability is a central consideration in site reliability engineering because keeping systems operational and accessible to users is SRE’s most important goal. Poor availability directly impacts customer satisfaction and can influence the reputation and financial performance of a business.
  • Performance: Performance is another crucial consideration, as it affects the user experience and the efficiency of operations. Optimizing performance involves ensuring that applications respond quickly to requests and can handle high volumes of transactions without degradation. This also ensures efficient resource use, contributing to the scalability and reliability of the system.
  • Emergency Response: Emergency response is vital in SRE to address incidents and outages swiftly and efficiently. Having a structured emergency response plan, including incident command protocols and clear communication channels, enables teams to quickly identify, diagnose, and mitigate issues. This minimizes downtime, reduces the impact on users, and ensures that lessons learned are incorporated into future resilience planning.

In relation to the broader philosophy of developer productivity, site reliability engineering is downstream of the DevOps movement of the last two decades. It is best viewed as the subset of DevOps concerned with key business metrics like those above. As such, it factors into the evolving role of developer experience as documented by Cortex's own Justin Reock.

Justin makes the point that some SRE responsibilities, such as learning Docker and Kubernetes, have shifted to developers without developer self-service solutions keeping pace. Poor resourcing risks raising cognitive load across the board, turning developers into subpar site reliability engineers while leaving SREs focused on reactive tickets instead of proactively improving systems.

Avoiding this cascading failure means providing resources for SREs alongside a clear plan to improve reliability, minimize downtime, and enhance customer experience. That starts with rock solid metrics.

What are SRE metrics and why are they important?

SRE metrics are the quantitative measures used to assess the availability and efficiency of software systems and services. They provide hard data that can help improve service reliability, minimize downtime, and enhance user experience. Good metrics bridge the divide between product development and operations, enabling teams to deliver quality software rapidly while staying within agreed reliability criteria and error margins.

Working with development teams and operations experts to define and target key metrics is the first step in building a culture that prioritizes reliability and stability in the software development and delivery processes. These should address SRE considerations across the lifecycle, from performance issues with new features to monitoring tools used in incident response.

SRE metrics help developers and operations teams align on expectations and delivery. Well-defined metrics work as catalysts for communication between team members. Since everyone has access to them, they can together make appropriate decisions to resolve an incident or further improve services.

What are the four golden signals of SRE?

The key metrics most commonly associated with SRE are the four golden signals. These are generally used when monitoring your services and identifying any issues in the application's performance. Inspecting your services in this way helps you prioritize concerns and effectively resolve them to maintain the health of your software.

Latency

  • Definition: Latency measures the time it takes for a system to process a request and return a response. It is a critical indicator of the responsiveness of a service.
  • How it’s measured/tracked: Latency is typically tracked by logging the start and end times of transactions or requests and calculating the difference. Tools like APM (Application Performance Monitoring) solutions, network monitoring tools, and custom logging within the application itself are commonly used for this purpose. A minimal sketch follows this list.
  • What does this metric reveal about the overall system/performance of a service? High latency can indicate bottlenecks or inefficiencies within the system, such as slow database queries, inadequate resource allocation, or poor load balancing. It directly impacts user experience, making it a vital metric for maintaining service quality.
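
As an illustration of the measurement approach above, here is a minimal sketch that times request handling and exports a latency histogram. It assumes the open-source prometheus_client Python library; the endpoint name and simulated work are illustrative, not taken from any particular system.

```python
# Minimal sketch: recording per-request latency as a histogram.
# Assumes the prometheus_client library; names and values are illustrative.
import random
import time

from prometheus_client import Histogram, start_http_server

REQUEST_LATENCY = Histogram(
    "http_request_latency_seconds",
    "Time spent handling a request",
    ["endpoint"],  # labeling by endpoint helps isolate slow paths
)

def handle_checkout():
    # Stand-in for real work such as a database query.
    time.sleep(random.uniform(0.01, 0.2))

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for a scraper to collect
    for _ in range(100):     # a real service would record this per incoming request
        with REQUEST_LATENCY.labels(endpoint="/checkout").time():
            handle_checkout()
```

Because latency is rarely normally distributed, alerting on high percentiles (for example p95 or p99) of such a histogram is usually more informative than alerting on the average.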

Traffic

  • Definition: Traffic measures the volume of requests or data that a service handles over a given period. It reflects the demand or load placed on the system.
  • How it’s measured/tracked: Traffic is tracked by counting the number of requests to a service, analyzing log files, or using network monitoring systems that can quantify the amount of data passing through. See the sketch after this list.
  • What does this metric reveal about the overall system/performance of a service? Analyzing traffic patterns helps in understanding user behavior, peak usage times, and potential stress points in the system. It is essential for capacity planning and ensuring the system can scale to meet demand.
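
Continuing the same assumptions (prometheus_client, illustrative endpoint names), a minimal sketch of traffic measurement is simply a labeled request counter that monitoring systems can graph as a rate.

```python
# Minimal sketch: counting requests to measure traffic.
# Assumes prometheus_client; methods and endpoints are illustrative.
from prometheus_client import Counter

REQUESTS_TOTAL = Counter(
    "http_requests_total",
    "Total HTTP requests received",
    ["method", "endpoint"],
)

def on_request(method: str, endpoint: str) -> None:
    # Called from request-handling middleware so every request is counted.
    REQUESTS_TOTAL.labels(method=method, endpoint=endpoint).inc()

# Simulated traffic; a dashboard would typically graph the per-second rate,
# e.g. rate(http_requests_total[5m]) in PromQL.
for _ in range(3):
    on_request("GET", "/search")
on_request("POST", "/orders")
```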

Errors

  • Definition: Errors represent the rate or number of failed requests within a system, indicating issues that prevent the service from functioning correctly.
  • How it’s measured/tracked: Error rates are tracked by analyzing response codes (e.g., 4xx and 5xx HTTP status codes), logging exceptions, or using application monitoring tools that can detect and categorize failures. An example follows this list.
  • What does this metric reveal about the overall system/performance of a service? A high error rate can point to problems in the code, infrastructure issues, or external dependencies failing. It provides insight into the stability and reliability of the service, directly affecting user satisfaction.
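
As a concrete example, the sketch below derives 4xx and 5xx error rates from a handful of HTTP status codes. The sample data is invented; in practice the codes would come from access logs or a metrics pipeline.

```python
# Minimal sketch: deriving error rates from HTTP status codes.
# The sample data is invented; real codes would come from logs or metrics.
from collections import Counter

status_codes = [200, 200, 502, 200, 404, 200, 500, 200, 200, 503]

by_class = Counter(code // 100 for code in status_codes)  # bucket into 2xx/4xx/5xx
total = len(status_codes)

server_error_rate = by_class[5] / total   # 5xx: failures on the service side
client_error_rate = by_class[4] / total   # 4xx: often tracked separately

print(f"5xx error rate: {server_error_rate:.1%}")  # 30.0%
print(f"4xx error rate: {client_error_rate:.1%}")  # 10.0%
```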

Saturation

  • Definition: Saturation measures the degree to which a resource (CPU, memory, bandwidth) is utilized. It indicates how "full" a service or system is and its remaining capacity to handle additional load.
  • How it’s measured/tracked: Saturation is monitored by using system monitoring tools that track the utilization levels of various resources. Metrics such as CPU load, memory usage, and network throughput are common indicators of saturation. See the sketch after this list.
  • What does this metric reveal about the overall system/performance of a service? Saturation levels give early warnings about potential performance degradation or system failures due to overload. It helps in identifying when scaling is necessary to maintain optimal performance and avoid downtime.
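
As a rough illustration, the sketch below samples CPU and memory utilization with the psutil library; the thresholds are arbitrary examples rather than recommendations.

```python
# Minimal sketch: sampling resource saturation with psutil.
# Thresholds are arbitrary illustrations, not recommendations.
import psutil

CPU_THRESHOLD = 80.0      # percent
MEMORY_THRESHOLD = 90.0   # percent

cpu_percent = psutil.cpu_percent(interval=1)       # averaged over a 1-second sample
memory_percent = psutil.virtual_memory().percent

print(f"CPU: {cpu_percent:.1f}%  Memory: {memory_percent:.1f}%")

if cpu_percent > CPU_THRESHOLD or memory_percent > MEMORY_THRESHOLD:
    # In production this check would feed an alerting or autoscaling system.
    print("Saturation warning: consider scaling before performance degrades.")
```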

Getting these metrics right also involves working directly with developers and empowering them with some of the SRE responsibilities. This typically takes the form of Service Level Objectives (SLOs), Service Level Agreements (SLAs), and Service Level Indicators (SLIs) that measure how operations are functioning. An SLO is a numerical target for system availability (or another indicator); from it you can derive an error budget and track outages against that budget.

To formalize these objectives, companies promise their customers that they will meet the service availability targets; that promise is the Service Level Agreement (SLA). Service Level Indicators (SLIs) are the measured values themselves, showing whether actual performance is better or worse than the target.
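
As a simple worked example (the 99.9% target and 30-day window are assumptions, not figures from this article), an availability SLO translates into an error budget like this:

```python
# Minimal sketch: turning an availability SLO into an error budget.
# The 99.9% target and 30-day window are illustrative assumptions.
slo = 0.999
window_days = 30
total_minutes = window_days * 24 * 60            # 43,200 minutes in the window

error_budget_minutes = (1 - slo) * total_minutes
print(f"Error budget: {error_budget_minutes:.1f} minutes of downtime "
      f"per {window_days} days")
# -> Error budget: 43.2 minutes of downtime per 30 days
```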

Best practices for measuring and improving SRE metrics

Using data to improve performance isn’t a surprising suggestion for software developers, but unfortunately there is no easy way to implement SRE metrics perfectly from the outset. It helps to keep Goodhart’s Law in mind when selecting metrics and to weigh the tradeoffs and second-order effects of optimizing for certain KPIs.

It’s worth emphasizing that the data you collect needs to be accurate and available in proportion to the job you are doing. This gets harder when you are tracking metrics across distributed architectures, microservices, and multiple environments, but that complexity makes SRE metrics even more important.

Consider the best practices below when implementing SRE metrics.

Define clear objectives and priorities

Defining clear objectives and priorities for your SRE metrics is the essential first step. By establishing what success looks like in terms of system reliability and performance, you can identify which metrics to track and manage, ensuring that resources are focused on the areas with the greatest impact on your company’s strategic goals, whether that is service quality or user satisfaction.

Establish comprehensive monitoring

Comprehensive monitoring is a cornerstone of effective SRE, allowing teams to gain visibility into system performance, detect issues early, and respond proactively. This involves tracking a wide range of metrics, from system health and usage to user experience indicators, ensuring that you have a holistic view of your service's performance and reliability.

Automate data collection and analysis

Automating the collection and analysis of SRE metrics frees SRE teams to focus proactively on strategic tasks rather than manual monitoring. Adopting tools and platforms that support automation can improve both the quality of your engineers’ work and the outcomes it produces.

Utilize error budgets

Error budgets establish a quantifiable tolerance for system failures, aligning operational and development teams on acceptable risk levels. By using error budgets, teams can balance the need for rapid innovation with the necessity of maintaining system reliability, making informed decisions about when to halt feature releases in favor of focusing on stability improvements.
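
One common way to operationalize this, sketched below with assumed numbers, is a burn-rate check: compare the observed failure ratio to the ratio the SLO allows, and treat a high multiple as a signal to slow feature work in favor of stability.

```python
# Minimal sketch: an error-budget burn-rate check. All figures are illustrative.
slo = 0.999
allowed_failure_ratio = 1 - slo              # 0.1% of requests may fail

def burn_rate(failed: int, total: int) -> float:
    """How many times faster than allowed the error budget is being consumed."""
    return (failed / total) / allowed_failure_ratio

rate = burn_rate(failed=200, total=10_000)   # 2% observed failure ratio
print(f"Burn rate: {rate:.0f}x")             # 20x: a 30-day budget lasts ~1.5 days
if rate > 10:
    print("High burn rate: consider pausing releases and prioritizing stability.")
```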

Embrace continuous improvement

The principle of continuous improvement encourages teams to iteratively enhance their systems and processes based on real-world feedback and performance data. By regularly reviewing SRE metrics and learning from incidents, teams can identify areas for optimization, reduce toil, and incrementally improve service reliability and efficiency over time.

Use an internal developer portal

An internal developer portal serves as a centralized hub for accessing tools, documentation, and resources related to SRE practices. The right solution acts not only as a single point of access for SRE metrics, dashboards, and best practices; it should also be a predictive engineering system of record. That lets you draw on data generated by every part of the developer workflow, surfacing SRE metrics along with the crucial context needed to interpret them.

Deciding that you want to monitor some metrics and setting up dashboards does not suffice if you want to effectively integrate SRE into your team’s workflows. We recommend starting to build a culture of monitoring and visibility from day one that is both constructive and sustainable in the long term.

Consider the unique needs of your business when assigning metrics. If, for instance, you are in the social media field, you likely will not gain as much from monitoring error rates as from keeping track of latency: keeping response times within a few milliseconds may be more of a priority than letting a few requests slide from time to time.

Your priorities will change, and your metrics should evolve with them. One year you might be particularly focused on incident management and on having your team resolve incidents rapidly; in that case, you may want to track mean time to recovery alongside latency.

Finally, make sure that developers are fully engaged in the process. Based on their knowledge and estimates of the services, they should decide on certain metrics that are important for observability and follow through by tracking them independently. They will have ideas about what might result in high change failure rates, for instance, and that may be important to them in their software development role.

In addition to tracking their own metrics, the SRE team can keep an eye on whether and how other teams are integrating SRE ideas and tools into their workflows. This underlines the need to decentralize the responsibility, allowing SREs to collaborate with developers on their goals as well as handle responsibilities internal to the team.

Keep in mind that monitoring SRE metrics, including for individual teams, is not a one-time affair. Like testing, it must be treated as a continuous process, so that you can see the impact of the actions you take after alerting on certain metrics. Priorities can shift with time, but choosing and consistently tracking metrics is most impactful when done over the long term.

How can Cortex help track and improve SRE metrics?

Cortex's IDP serves as a single source of truth, allowing SRE teams to bridge development and ops teams, monitor service performance, drive adherence to standards, and improve service quality—all in one place.

As well as monitoring and aggregating metrics, Cortex can build these targets into Scorecards that help decentralize responsibility and hold teams accountable.

Setting and tracking SRE metrics is crucial to ensuring that your services are healthy and that your teams have concrete and meaningful goals to work towards. Cortex’s service catalog offers the perfect opportunity to start building an SRE mindset in your team. It offers visibility into every service that comprises your infrastructure. You can, for instance, choose to observe how a service is performing against a specific SLO.

After defining the SRE metrics and golden signals that you want to prioritize, the next step involves collecting and analyzing data to integrate these into your workflow. Cortex allows you to synchronize this data through our standard integrations or create customized apps using plug-ins to integrate within the IDP's interface.

You can also use the Eng Intelligence tool to extract data from the software development life cycle (SDLC), pinpointing critical insights related to site reliability engineering. This aids in identifying bottlenecks and digging into root causes rather than just surface-level issues.

These metrics are only useful to the extent that they are integrated into your workflows, and this is where the developer homepage comes in. This allows you to use Slack and Teams notifications to ensure that developers can be fully involved in the process, allowing SRE to remain decentralized and effective.

To learn more about how Cortex can support your site reliability engineering goals, check out this on-demand webinar or book a demo today.
