August 9, 2023 By Ajuma Bella Salifu 3 min read

In our blog series, we’ve been debunking common observability myths.

Observability refers to the ability to gain insights into a system’s internal behaviour based on external outputs or signals. The primary goal of observability is to provide IT teams with the necessary tools to understand system performance, identify problems and troubleshoot effectively.

In this article, we’ll tackle the common myth that observability is only relevant and beneficial for large-scale systems or complex architectures.

Why is this a myth?

The problem with this misconception is that it can deter smaller organizations or teams from adopting observability practices. For many companies, however, their applications are their business, and the lack of an observability framework can limit their ability to diagnose issues and optimize systems in a timely manner.

Let’s look at a simple web application as an example. Even a small web app can benefit from observability by implementing basic logging and metrics. By tracking user interactions, request/response times and error rates, developers can detect anomalies and identify areas for improvement. This can lead to a better user experience and ultimately enhance the application’s success.
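As a rough sketch of what that basic instrumentation might look like, the snippet below wraps a request handler to record latency, request counts and error counts, using only Python’s standard library. The handler name and the in-process metrics dictionary are illustrative assumptions; a real application would typically export these numbers to a metrics backend.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("webapp")

# In-process metrics store (illustrative); a real app would export
# these to a metrics backend rather than keep them in a dict.
metrics = {"request_count": 0, "error_count": 0, "latencies_ms": []}

def observed(handler):
    """Wrap a request handler to record latency, errors and a structured log line."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return handler(*args, **kwargs)
        except Exception:
            metrics["error_count"] += 1
            logger.exception("handler %s failed", handler.__name__)
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            metrics["request_count"] += 1
            metrics["latencies_ms"].append(elapsed_ms)
            logger.info("handler=%s latency_ms=%.2f", handler.__name__, elapsed_ms)
    return wrapper

@observed
def get_profile(user_id):
    # Hypothetical handler standing in for real application logic.
    return {"user": user_id}
```

Even this small amount of instrumentation is enough to answer the questions above: how often each handler runs, how long it takes, and how frequently it fails.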

Fact: Observability benefits systems of all sizes, from small applications to large-scale distributed architectures

The value of observability lies in gaining insights into system behaviour, identifying performance bottlenecks and troubleshooting issues effectively. Even simple applications can benefit from observability by proactively monitoring critical components and detecting anomalies early on.

Moreover, consider the case of a microservices architecture. Although each microservice might be relatively simple on its own, the interactions and dependencies between them can quickly become complex. In such scenarios, observability becomes crucial to trace requests across different services, measure latency and pinpoint performance bottlenecks.
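To make the tracing idea concrete, here is a minimal sketch of propagating a correlation ID between two hypothetical services in plain Python. The service names and the `X-Trace-Id` header are assumptions for illustration; production systems would normally follow a standard such as W3C Trace Context, typically via a library like OpenTelemetry.

```python
import uuid

def inventory_service(headers):
    """Downstream service: reuses the incoming trace ID so its logs correlate."""
    trace_id = headers["X-Trace-Id"]
    log_line = f"trace_id={trace_id} service=inventory checking stock"
    return {"in_stock": True, "log": log_line}

def order_service():
    """Upstream service: starts a trace and forwards the ID to its dependency."""
    trace_id = uuid.uuid4().hex
    headers = {"X-Trace-Id": trace_id}
    result = inventory_service(headers)
    log_line = f"trace_id={trace_id} service=order placing order"
    # Both services' log lines now share one trace ID, so a single
    # request can be followed across service boundaries.
    return trace_id, [log_line, result["log"]]
```

Because every log line carries the same trace ID, an operator can reconstruct the full path of one request across services, which is exactly what becomes hard to do by hand as the number of services grows.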

Start-ups and small companies that are rapidly scaling can greatly benefit from observability. As their systems grow in complexity, they face new challenges and potential failures. By adopting observability early on, these organizations can build a solid foundation for monitoring and troubleshooting, ensuring smoother growth and minimizing the risk of unexpected issues.

Even individual developers working on personal projects can gain insights from observability. By using real-time monitoring to see relevant events and metrics during development and testing, they can spot problems early, leading to more robust and reliable applications.

A notable example of the importance of observability occurred in 2012, when a financial services firm lost more than $400 million in under an hour because of a code deployment error. The incident underscores how critical observability is in domains like financial services, where seemingly small errors can have severe consequences.

Observability provides essential insights and diagnostic capabilities for systems of all sizes and architectures. This means better system performance, greater reliability and better user experiences. No matter the size of your organization, consider implementing an observability solution as an integral part of your software engineering and monitoring practices. It’s one of the best ways to stay competitive and resilient in today’s ever-changing, technology-driven landscape.

Observability by the numbers

High performance during an unprecedented boom: During the lockdown in 2020, online commerce surged to unprecedented volumes across the world. That year, GittiGidiyor grew mobile sales revenue by 82% and accommodated a 4-5x increase in its overall volume during Black Friday.

Improving patient outcomes: Mayden creates digital technology that changes what’s possible for clinicians and patients. They use observability to support the delivery of mental health services, and their main product, iaptus, helps deliver mental health services to more than five million patients in the UK.

IBM’s approach to enterprise observability

IBM’s observability solution, IBM Instana, is purpose-built for cloud-native and designed to provide high-fidelity data automatically and continuously (e.g., one-second granularity and end-to-end traces) with the context of logical and physical dependencies across mobile, web, applications and infrastructure. Our customers have been able to achieve tangible results using real-time observability.


What’s next?

Stay tuned for our next blog, where we debunk another common myth about observability: “Observability is expensive.”


