December 18, 2023 By Ben Ball 2 min read

“I want it now!”—This isn’t just a phrase spoiled children sing; it’s what we demand every time we click a link, stream video content or access an online application.

As internet traffic grows in volume and complexity, our expectations for faster response times from the services and content we use rise. We often click away if results aren’t instant. For businesses delivering applications and services, the fierce urgency of “now” is a logistical headache. Internet traffic must navigate different clouds, content delivery networks (CDNs) and other core services on the back end. Achieving consistently high performance requires an efficient routing system that optimizes traffic between the services your application depends on.

IBM® NS1 Connect® uses the power of the Domain Name System (DNS) to automatically steer traffic to the best-performing service available, enabling you to meet user expectations. IBM® NS1® uses monitoring data to dynamically switch endpoints based on rules and priorities you set in advance. In NS1 Connect, traffic steering configurations apply to individual DNS zone records. These configurations determine how NS1 Connect handles queries for each record and which answers it returns. Each filter in a chain applies its own logic to a query, so you can combine filters into chains tailored to your operational or business needs.
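As a rough sketch of what a filter chain configuration might look like, the Python dict below models a DNS record with two chained filters: an availability check followed by a random shuffle. The field and filter names here are illustrative assumptions rather than the authoritative NS1 schema; consult the NS1 documentation for the exact payload format.

```python
# Hypothetical sketch of a DNS record with a traffic steering filter chain.
# Field names and filter names are assumptions -- check the NS1 API docs
# for the authoritative schema before using anything like this.
record = {
    "zone": "example.com",
    "domain": "app.example.com",
    "type": "A",
    # Candidate endpoints; "up" metadata would normally be fed by monitors.
    "answers": [
        {"answer": ["192.0.2.10"], "meta": {"up": True}},
        {"answer": ["198.51.100.20"], "meta": {"up": True}},
    ],
    # Filter chain: first drop unavailable answers, then shuffle the rest.
    "filters": [
        {"filter": "up", "config": {}},
        {"filter": "shuffle", "config": {}},
    ],
}

print(len(record["answers"]), [f["filter"] for f in record["filters"]])
```

The order of the list matters: queries flow through the chain top to bottom, so the availability filter prunes answers before the shuffle distributes traffic among whatever remains.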

How you optimize application performance varies by business, so NS1 Connect offers several traffic steering options:

  1. Round Robin (Shuffle): Distributes application traffic evenly across multiple endpoints, preventing overload and overdependence on any single service provider. Filters in the chain include “Up” to check endpoint availability and “Shuffle” to distribute traffic randomly among designated service providers.
  2. Round Robin (Shuffle) with session persistence: Balances traffic load while maintaining a consistent user experience. NS1 Connect uses the same logic to distribute traffic among different service providers, while defaulting to the same provider for queries that originate from the same location. This prevents mid-stream changeovers for the sake of load balancing. It uses “Sticky Shuffle” to ensure that load balancing doesn’t disrupt ongoing sessions.
  3. Distribute application traffic based on site capacity: Favors specific services, sending more traffic to cheaper or better-performing options while maintaining availability for load balancing. “Weighted Shuffle” and “Weighted Sticky Shuffle” distribute traffic based on predefined weights.
  4. Send users to the closest location (geotargeting): Directs traffic to endpoints based on the originating location, with options like geotarget country, geotarget region and geotarget latlong to specify granularity: A) Geotarget country narrows the answers to service providers that match the query’s originating country; if no service provider is available in that country, this part of the chain is effectively skipped; B) Geotarget region narrows the answers to service providers whose metadata matches the query’s geographical region; and C) Geotarget latlong chooses the closest service provider based on the distance between the query’s origin (as resolved through a GeoIP database) and each provider’s location.
  5. Distribute application traffic based on current site load (shed load): Enforces limits on CDNs or service providers in real time. The “shed load” filter steers traffic to compliant providers based on load-related metrics, helping manage contractual or cost limits automatically. More information about the settings for the shed load filter is in our NS1 documentation portal.
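To make the steering logic above concrete, here is a minimal Python simulation (not NS1’s implementation) of two of the behaviors described: a weighted shuffle that biases traffic toward higher-weight endpoints, and a sticky choice that deterministically pins a given client to one answer so load balancing doesn’t disrupt ongoing sessions. The CDN hostnames and weights are purely illustrative.

```python
import hashlib
import random


def weighted_shuffle(answers, weights, rng=random):
    """Pick one answer at random, biased by per-answer weights (load-balancing sketch)."""
    return rng.choices(answers, weights=weights, k=1)[0]


def sticky_choice(answers, client_id):
    """Deterministically map a client to one answer so its sessions persist."""
    digest = hashlib.sha256(client_id.encode()).hexdigest()
    return answers[int(digest, 16) % len(answers)]


cdns = ["cdn-a.example.net", "cdn-b.example.net"]

# Weighted shuffle: cdn-a receives roughly 3x cdn-b's share of 10,000 queries.
picks = [weighted_shuffle(cdns, [3, 1]) for _ in range(10_000)]
print(picks.count("cdn-a.example.net") > picks.count("cdn-b.example.net"))

# Sticky behavior: the same client location always resolves to the same CDN.
print(sticky_choice(cdns, "203.0.113.7") == sticky_choice(cdns, "203.0.113.7"))
```

In a real deployment, the weights would reflect site capacity or cost, and stickiness would be keyed on the resolver or client subnet rather than a raw string, but the shape of the logic is the same.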

In summary, IBM NS1 Connect offers a range of traffic steering options to meet diverse business needs to help ensure optimal application performance in the “now” era.

Visit the NS1 documentation portal today
