Make Security Simple: Streamlining Security Policies in Unified SASE

Balancing Configuration and Control is critical for reducing security risks and management complexity

The Secure Access Service Edge (SASE) service, along with its associated architecture, comprises a powerful amalgamation of multiple security components. These include a stateful inspection firewall, Intrusion Detection and Prevention System (IDPS), DNS security, DoS/DDoS protection, Secure Web Gateway (SWG), Zero Trust Network Architecture (ZTNA), Cloud Access Security Broker (CASB), and many more. These components grant administrators the ability to configure them through policies, offering a robust shield to protect an organization’s assets against threats while adhering to specific access requirements.

The Role of Policy Configuration

Policy configuration plays an indispensable role in enforcing security within the SASE framework. The repercussions of poorly configured policies range from threats to resources and data leaks to unintended, overly permissive access. In today’s industry landscape, organizations grapple with two predominant approaches to security policy management, both sketched after the list below:

  1. The Single Table Approach: A consolidated policy table containing a myriad of policies that span threat management and various access control scenarios across all SASE components.
  2. The Multi-Table Approach: Multiple policy tables, each addressing specific aspects such as threat protection, access control, different applications, and user groups.
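
To make the contrast concrete, here is a minimal, hypothetical sketch of how the two approaches might be laid out as data; the function names, attributes, and actions are illustrative assumptions, not any product’s actual schema.

```python
# Hypothetical policy layouts -- illustrative only, not a real product schema.

# Single-table approach: every policy for every function lives in one ordered list.
single_table = [
    {"function": "firewall", "src": "10.0.0.0/8", "dst_port": 443, "action": "allow"},
    {"function": "swg",      "url_category": "gambling",           "action": "deny"},
    {"function": "ztna",     "app": "hr-portal", "group": "hr",    "action": "allow"},
    # ... hundreds more entries, spanning unrelated functions and applications
]

# Multi-table approach: one table per security function / application / subject scope.
multi_table = {
    "firewall":       [{"src": "10.0.0.0/8", "dst_port": 443, "action": "allow"}],
    "swg":            [{"url_category": "gambling", "action": "deny"}],
    "ztna:hr-portal": [{"group": "hr", "action": "allow"}],
}
```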

Striking a Balance in Policy Management

The expectation from SASE is clear: it should offer easily manageable security policies and simplified troubleshooting procedures. Achieving this necessitates a balanced approach. One effective strategy is to mitigate policy complexity based on the organization’s requirements. Larger organizations may require compartmentalization via a multi-table approach, where policy table granularity is defined by security function, application resource, and subject (users/groups). Smaller organizations may prefer compartmentalization with fewer policy tables, combining multiple types of access controls or even combining threat protection with access control. This flexible approach minimizes the number of policies requiring simultaneous management, rendering them more manageable.

However, it’s important to exercise caution to avoid excessive compartmentalization, which can introduce its own set of challenges. Administrators may find themselves navigating through multiple policy tables to identify and address issues, potentially causing delays in resolution.

Understanding the Key Requirements

Before delving deeper into the intricacies of policy management, it’s crucial to understand the specific requirements that organizations must address within the SASE framework. Key areas include:

Need for Role-Based Security Configuration Management in SASE Environments

Secure Access Service Edge (SASE) components offer comprehensive security, encompassing threat protection and access control for a wide range of resources across diverse organizations, including their workforce, partners, and guests. Within this security framework, organizations often have distinct categories of administrators responsible for different aspects of security.

For example, an organization may have one group of administrators dedicated to managing threat protection while another group focuses on access controls. Within these broad categories, it’s common for organizations to establish various administrative roles tailored to specific types of threat protection and access control. Let’s delve into some practical examples:

Threat Protection Roles:

  • Intrusion Detection and Firewall Configuration: Administrators with the “threat-protection-ngfw-role” are granted access to configure Intrusion Detection and firewall settings within the SASE environment.
  • Reputation Controls: Administrators holding the “threat-protection-reputation-role” can manage settings related to IP reputation controls, URL-based reputation controls, domain-based reputation controls, file reputation controls, as well as cloud-service and cloud-organization reputation controls.
  • Malware Protection: Administrators with the “threat-protection-malware-protection-role” have the authority to configure settings specifically pertaining to malware protection controls.

Access Control Roles:

  • SWG Configuration: Administrators designated as “access-control-Internet-role” are responsible for managing Secure Web Gateway (SWG) configurations.
  • SaaS Application-Specific Access Control: Roles like “access-control-saas1app-role” and “access-control-saasNapp-role” focus on configuring access control policies for specific applications (SaaS Service 1 and SaaS Service N), ensuring fine-grained control.
  • Enterprise Application Management: Roles such as “access-control-hostedapp1-role” and “access-control-hostedappM-role” are dedicated to handling access control configurations for enterprise-level applications, app1 and appM.

In cases where an organization uses multi-tenant applications, additional roles may be introduced to manage security configurations effectively. For instance, roles can be established to configure policies for the organization’s workforce, per-tenant workforce, and even guests. Consider an application “X” with security configurations managed by different sets of administrators:

  • Owner Workforce Security: Administrators with “access-control-hostedappX-role” and “access-control-owner-workforce-role” collaborate to manage access control configurations for application “X” for the owner’s workforce.
  • Application Tenant-Specific Workforce (Tenant A): Roles like “access-control-hostedAppX-role” and “access-control-owner-tenantA-workforce-role” enable administrators to configure access control settings for tenant A’s workforce.
  • Application Tenant-Specific Workforce (Tenant B): For a multi-tenant application “X,” roles such as “access-control-hostedAppX-role” and “access-control-owner-tenantB-workforce-role” facilitate the management of access control configurations for tenant B’s workforce.

Additionally, even non-multi-tenant enterprise applications may require separate administrators for different workforce segments. For instance:

  • Engineering Department: Administrators with “access-control-hostedappY-role” and “access-control-eng-role” focus on managing access control configurations for application “Y” within the engineering department.
  • Sales & Marketing: Roles like “access-control-hostedappY-role” and “access-control-sales-role” are designated for configuring access control settings for sales and marketing teams.
  • IT Department: Administrators with “access-control-hostedappY-role” and “access-control-it-role” have responsibilities for access control configurations pertaining to the IT department.

A significant advantage of role-based security configuration management is its ability to provide granular control tailored to specific responsibilities. Contrast this approach with the challenges that can arise when using a single, all-encompassing policy table:

  • Error-Prone: Multiple administrators working with the same policy table and overlapping permissions may inadvertently introduce errors when adding, deleting, or modifying policies.
  • Troubleshooting Complexity: Resolving issues within a monolithic policy table can be time-consuming and challenging.
  • Policy Overload: Consolidating all policies into a single table, covering various applications and threat protection scenarios, can lead to a cumbersome and unwieldy policy management experience, as well as potential performance challenges during policy evaluation.

In conclusion, adopting role-based security configuration management within SASE environments empowers organizations to efficiently delegate responsibilities, enhance security, and streamline policy management while avoiding the complexities associated with single-table approaches.
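
As an illustration of how such delegation might be modeled, the sketch below maps administrative roles to the policy tables they may edit; the role names follow the examples above, while the policy-table names and the can_edit helper are assumptions for illustration only.

```python
# Hypothetical role-based configuration scope. The role names follow the examples
# above; the policy-table names and this helper are assumptions for illustration.

role_scope = {
    "threat-protection-ngfw-role":       ["idps-policies", "firewall-policies"],
    "threat-protection-reputation-role": ["reputation-policies"],
    "access-control-Internet-role":      ["swg-policies"],
    "access-control-hostedappY-role":    ["appY-access-policies"],
}

def can_edit(admin_roles: set, policy_table: str) -> bool:
    """An administrator may edit a policy table only if one of their roles covers it."""
    return any(policy_table in role_scope.get(role, []) for role in admin_roles)

# Example: an engineering-department administrator for application Y
print(can_edit({"access-control-hostedappY-role", "access-control-eng-role"},
               "appY-access-policies"))   # True
print(can_edit({"access-control-hostedappY-role"}, "swg-policies"))   # False
```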

Working Alongside Configuration Change Control Management

Organizations are increasingly embracing change control management for all configurations, including SASE configuration, to proactively detect and rectify configuration errors before they are implemented. This practice not only serves as a safeguard but also introduces a secondary layer of scrutiny, allowing a second set of eyes to review and approve configurations before they take effect.

Security policy configurations are applied directly within the traffic flow, making any errors in policies potentially disruptive to services and incurring direct financial consequences.

To cope with the inherent complexity of security policy configuration, it’s common practice to serialize changes. This means that when modifying one type of configuration, no other configuration sessions of the same type are initiated until the previous one is either applied or revoked. However, when using a single policy table that encompasses all threat and access control functions, serializing changes can introduce delays in configuration adjustments performed by other administrators.

In contrast, a multi-table approach can effectively address this scenario, allowing different administrators to concurrently work on distinct tables, thus streamlining the configuration change process.

Not all organizations share the same requirements:

SASE is typically offered as a service, and SASE providers may serve multiple organizations as customers. These organizations can vary significantly in terms of size, regulatory requirements, and the diversity of roles within their structures. Some organizations might host multiple applications, either On-Premises or in the cloud, while others may exclusively rely on services from SaaS providers, and some may incorporate a combination of both.

Furthermore, certain organizations may not have a need for various administrative roles or multiple administrative users. In scenarios where organizations have only a limited number of applications and lack the complexity of multiple administrative roles, they may find value in using fewer policy tables.

SASE solutions should be designed to offer the flexibility required to accommodate these diverse needs, including the option of using consolidated policy tables for multiple relevant security functions and applications.

Avoiding confusing configurations:

Certain SASE solutions, in their pursuit of simplification as discussed before, opt for a single, all-encompassing policy table where policies can be configured with values for various matching attributes. During traffic processing, policy selection is based on matching the values from the incoming traffic and other contextual information against the attribute values specified in the policies.

However, it’s crucial to recognize that during traffic processing, not all attributes of the traffic are readily known. For instance, in the case of stateful inspection firewalls, only a limited set of traffic values can be extracted, such as the 5-tuple information (source IP, destination IP, source port, destination port, and IP protocol). Meanwhile, for proxy-based security components like SWG, ZTNA, and CASB, the extraction of attribute values can vary and may involve distinct stages, notably the Pre-TLS inspection and Post-TLS inspection phases.

Before TLS inspection/decryption, many HTTP attributes remain unknown. It’s only after TLS decryption that additional attributes, such as access URI path, HTTP method, and request headers, become available for evaluation.

It is impractical to expect administrators responsible for configuring security policies to keep track of which attributes are valid at the various stages of packet processing while defining policies. While some security solutions claim that irrelevant attributes are simply not considered during policy evaluation, determining which attributes are pertinent and which are not can be challenging when inspecting complex policies.

We firmly believe that amalgamating policy tables across multiple stages into a single table creates complexity and confusion for administrators. Such an approach can be challenging to comprehend and lead to potentially perplexing configurations. To ensure clarity, it is advisable to create policies within a given table that include only relevant attributes for consistent and straightforward evaluations.
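
One way to keep tables stage-consistent is to validate, at configuration time, that a policy only references attributes observable at its processing stage. The sketch below is a minimal illustration under assumed stage names and attribute sets; it does not describe any specific SASE implementation.

```python
# Hypothetical per-stage attribute validation. The stage names and attribute sets
# are assumptions for illustration, not a product specification.

STAGE_ATTRIBUTES = {
    "firewall": {"src_ip", "dst_ip", "src_port", "dst_port", "protocol"},
    "pre_tls":  {"src_ip", "dst_ip", "sni", "user", "device_posture"},
    "post_tls": {"src_ip", "dst_ip", "sni", "user", "device_posture",
                 "uri_path", "http_method", "request_headers"},
}

def invalid_attributes(stage: str, policy: dict) -> list:
    """Return policy attributes that are not observable at the given stage."""
    return [attr for attr in policy if attr not in STAGE_ATTRIBUTES[stage]]

# A pre-TLS policy referencing the URI path is flagged at configuration time
# instead of being silently ignored during traffic processing.
print(invalid_attributes("pre_tls", {"sni": "example.com", "uri_path": "/admin/*"}))
# ['uri_path']
```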

Optimizing Deny and Allow Policy Tables:

Certain security solutions adopt a structure where they maintain separate “Deny” and “Allow” policy tables. Within this setup, policies defined in the “Deny” list take precedence and are evaluated first. If no matching policy is found in the “Deny” table, the evaluation proceeds to the “Allow” policy table. However, this division of policies into two distinct tables can pose challenges for administrators.

We firmly advocate for a more streamlined approach, where any given policy table is presented as an ordered list of policies. In this arrangement, each policy explicitly specifies its action—whether it’s “Deny,” “Allow,” or any other desired action. During traffic processing, policy evaluation follows a logical progression from the highest priority policy to the lowest priority policy until a match is found. Once a matching policy is identified, the corresponding action is applied to the traffic. In cases where no matching policy is encountered, a default action, such as “fail open” or “fail close,” is triggered as defined by the organization’s security policy.

This approach simplifies policy management and enhances clarity for administrators by consolidating policies within a single, ordered list irrespective of the policy action values, thereby minimizing complexity and streamlining the policy evaluation process. Not separating policy tables based on action values also enables SASE solution providers to introduce new action values in the future relatively easily.
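
A minimal sketch of this first-match evaluation over a single ordered table is shown below; the attribute names, actions, and the default action are illustrative assumptions.

```python
# Minimal first-match evaluation over a single ordered policy table. Attribute
# names, actions, and the default action are illustrative assumptions.

policies = [
    {"match": {"dst_app": "hr-portal", "user_group": "hr"}, "action": "allow"},
    {"match": {"dst_app": "hr-portal"},                     "action": "deny"},
]

def evaluate(session: dict, table: list, default_action: str = "deny") -> str:
    for policy in table:                       # highest priority first
        if all(session.get(k) == v for k, v in policy["match"].items()):
            return policy["action"]            # first match wins
    return default_action                      # "fail open" / "fail close" per org policy

print(evaluate({"dst_app": "hr-portal", "user_group": "hr"},     policies))  # allow
print(evaluate({"dst_app": "hr-portal", "user_group": "intern"}, policies))  # deny
```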

Creating Flexible and Expressive Policies:

As you’ve gathered, administrators craft policies by defining sets of values for matching attributes. Traditionally, there has been a common understanding of how policy matching operates during traffic evaluations: a policy is considered a match only when all the attribute values specified in the policy align perfectly with the values of the incoming traffic session. These values can either be extracted directly from the traffic or inferred from contextual information, such as the authenticated user context and the device context responsible for initiating the traffic. Essentially, this matching process involves an ‘AND’ operation across all attributes of the policy.

However, as security technologies have evolved, many security devices have introduced a more flexible approach, granting administrators the ability to assign multiple values to attributes. In this evolved paradigm, a match is established if the runtime context information aligns with any of the values specified for the policy attributes. In essence, the matching process now combines an ‘AND’ operation across attributes with an ‘OR’ operation across the values associated with those attributes.

Organizations stand to benefit significantly from this flexibility when creating comprehensive policies. It reduces the overall number of policies required while maintaining readability. However, these multi-value attributes are just one step in the right direction, and further enhancements are often necessary to meet organizations’ unique requirements:

Support for “NOT” Decoration: Administrators require the ability to define policy attribute values with a “NOT” decoration. For instance, it should be possible to specify a ‘source IP’ attribute value as “NOT 10.1.5.0/24,” indicating that the policy will match successfully when the traffic session’s source IP does not belong to the 10.1.5.0/24 subnet.

Support for Multiple Instances of an Attribute: Many traditional security devices support only one instance of a given attribute within a policy. To create comprehensive policies, the ability to include multiple instances of the same attribute within a single policy is essential. For example, an administrator may want to allow sessions from any IP address in the 10.0.0.0/8 subnet while simultaneously denying traffic sessions from the 10.1.5.0/24 subnet. This should be achievable within a single policy, perhaps by specifying ‘source IP’ values twice: “source IP == 10.0.0.0/8” and “source IP == NOT 10.1.5.0/24.” This prevents the need to create two separate policies and allows for more intuitive policy management.

Support for Decorations for String Type Values: Attributes that accept string values, such as URI paths, domain names, and many HTTP request headers, benefit from decorations like ‘exact,’ ‘starts_with,’ and ‘ends_with.’ These decorations enhance the creation of expressive policies.

Support for Regular Expression Patterns: In some cases, policies require pattern matching within traffic values. For instance, a policy may dictate that a traffic session is allowed only if a specific pattern is present anywhere in the ‘user agent’ request header value. Support for regular expression patterns is essential in such scenarios.

Support for Dynamic Attributes: While traditional attributes in policies are fixed and predefined, SASE environments sometimes require dynamic attributes. Consider request and response headers or JWT claims, where standards coexist with numerous custom headers and claims. SASE should empower administrators to create policies that accommodate custom headers and claims. For example, SASE should allow the creation of policies with the request header ‘X-custom-header’ as an attribute and the value ‘matchme.’ At traffic time, any HTTP sessions with ‘X-custom-header’ as one of the request headers and ‘matchme’ as the value should match the policy.
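
The sketch below illustrates how these enhancements could be expressed together: an ‘AND’ across conditions, an ‘OR’ across a condition’s values, a NOT decoration, multiple instances of the same attribute, string and subnet operators, regular expressions, and a dynamic custom-header attribute. The condition model, operator names, and session fields are assumptions made for illustration, not a documented policy format.

```python
import ipaddress
import re

# Hypothetical policy-condition model -- operators and attribute names are
# illustrative assumptions, not a product's actual policy language.

def value_matches(op: str, candidate: str, pattern: str) -> bool:
    if op == "exact":       return candidate == pattern
    if op == "starts_with": return candidate.startswith(pattern)
    if op == "ends_with":   return candidate.endswith(pattern)
    if op == "regex":       return re.search(pattern, candidate) is not None
    if op == "subnet":
        return ipaddress.ip_address(candidate) in ipaddress.ip_network(pattern)
    raise ValueError(f"unknown operator {op}")

def condition_matches(session: dict, cond: dict) -> bool:
    """OR across the condition's values; 'negate' implements the NOT decoration."""
    candidate = session.get(cond["attr"]) or session.get("headers", {}).get(cond["attr"], "")
    hit = any(value_matches(cond["op"], candidate, v) for v in cond["values"])
    return not hit if cond.get("negate") else hit

def policy_matches(session: dict, conditions: list) -> bool:
    """AND across conditions; the same attribute may appear in several conditions."""
    return all(condition_matches(session, c) for c in conditions)

policy = [  # allow 10.0.0.0/8 but NOT 10.1.5.0/24, only with a specific custom header
    {"attr": "src_ip", "op": "subnet", "values": ["10.0.0.0/8"]},
    {"attr": "src_ip", "op": "subnet", "values": ["10.1.5.0/24"], "negate": True},
    {"attr": "X-custom-header", "op": "exact", "values": ["matchme"]},
]

session = {"src_ip": "10.2.3.4", "headers": {"X-custom-header": "matchme"}}
print(policy_matches(session, policy))  # True
```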

Support for Objects: This feature allows the creation of various types of objects with values that can be used as attribute values in policies, rather than specifying immediate values. Objects can be referenced across multiple policies, and any future value changes can be made at the object level, simplifying policy modifications, and ensuring consistency.
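
A small sketch of this object indirection, with hypothetical object and field names, might look as follows; one edit to the object is picked up by every policy that references it.

```python
# Hypothetical "object" indirection: policies reference a named object instead of
# embedding literal values, so a change to the object updates every policy at once.

objects = {"branch-subnets": ["10.0.0.0/8", "192.168.0.0/16"]}

policy = {"attr": "src_ip", "op": "subnet", "values_ref": "branch-subnets", "action": "allow"}

def resolve(policy: dict) -> list:
    """Look up the object referenced by the policy at evaluation time."""
    return objects[policy["values_ref"]]

objects["branch-subnets"].append("172.16.0.0/12")  # one edit, all referencing policies follow
print(resolve(policy))
```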

These enhancements contribute to the creation of flexible, expressive, and efficient security policies, empowering organizations to tailor their policies to unique security needs and scenarios effectively.

Enhancing Policy Integration with Traffic Modifications

Certain security functions necessitate modifications to traffic, with the most common use cases involving the addition, deletion, or modification of HTTP request/response headers and their values, query parameters and their values, and even the request/response body. These modifications can vary significantly based on administrators’ configurations. Often, the specific modifications depend on traffic values, such as the destination application/site service, as well as contextual information available during traffic runtime.

Rather than maintaining a separate policy table for traffic modifications, it is often more efficient to include these modification objects within the access policies themselves. This approach streamlines policy management and ensures that modifications are directly aligned with the policies governing traffic behavior.

One prominent scenario where traffic modification is essential is in the context of Cloud Access Security Broker (CASB) solutions, particularly when organizations require multi-tenancy restrictions. These restrictions often involve the addition of specific request headers and values to enforce collaboration-specific policies. Additionally, there are other instances, such as the addition of custom headers for end-to-end troubleshooting and performance analysis, where traffic modifications play a crucial role.

Consequently, organizations expect SASE solutions to support policies that seamlessly integrate with modification objects. During traffic processing, traffic modifications are executed when the matched policy is associated with the appropriate modification objects, providing a unified and efficient approach to traffic management and policy enforcement.
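
Below is a minimal sketch of a modification object attached to an access policy; the header name, value, and field names are illustrative assumptions (for example, a CASB-style tenant-restriction header).

```python
# Hypothetical modification object attached to an access policy. Header names
# and values are illustrative assumptions only.

modification_objects = {
    "tenant-restriction": {"add_request_headers": {"X-Tenant-Allowed": "example-corp"}},
}

policy = {"match": {"dst_app": "saas-collab"}, "action": "allow",
          "modifications": ["tenant-restriction"]}

def apply_modifications(request_headers: dict, policy: dict) -> dict:
    """Apply the modification objects referenced by a matched policy to the request."""
    headers = dict(request_headers)
    for name in policy.get("modifications", []):
        headers.update(modification_objects[name]["add_request_headers"])
    return headers

print(apply_modifications({"Host": "collab.example.com"}, policy))
```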

Enhancing Observability:

It is common practice to log every traffic session at the conclusion of the session for the purpose of observability. In cases involving substantial or “elephant” sessions, it is also customary to periodically log access information. These session logs typically contain valuable data, including traffic metadata, actions taken during the session, and details regarding the packets and bytes transferred between the client and server.

One significant advancement offered by SASE is the consolidation of security functions and the adoption of single-pass, run-to-completion architectures, resulting in a unified session log. This contrasts with non-SASE security deployments, where each individual security component generates its own session log, often containing information about the policy that was matched and the critical attribute values used in the matching process. Importantly, while SASE generates a single log, there is an expectation that it should not compromise on the inclusion of critical information.

When a traffic session is allowed due to multiple policy evaluations across various security functions and policy tables, the resulting log should encompass information about every policy that was matched. Moreover, if a policy matches due to the values of specific traffic or context attributes, the log should provide precise details about the attribute values that led to the policy match.

Given that organizations rely on comprehensive logs for effective observability, SASE solutions are expected to furnish thorough information in the logs, ensuring that administrators have access to the data they need to monitor and analyze network traffic effectively.
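
For illustration, a unified session log record satisfying these expectations might look roughly like the following; every field name here is an assumption, not a documented log format.

```python
# Illustrative shape of a unified session log record -- all field names are assumptions.
session_log = {
    "session_id": "3f6c...",
    "user": "alice@example.com",
    "src_ip": "10.2.3.4",
    "dst": "app.example.com:443",
    "bytes_client_to_server": 18324,
    "bytes_server_to_client": 912877,
    "final_action": "allow",
    "matched_policies": [
        {"table": "firewall",     "policy_id": 12, "matched_on": {"dst_port": 443}},
        {"table": "swg-post-tls", "policy_id": 4,
         "matched_on": {"uri_path": "/reports/*", "user_group": "finance"}},
    ],
}
```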

SASE Approach to Policy Management:

It’s important to recognize that not all SASE solutions are identical. Organizations should carefully assess whether a particular SASE solution aligns with their specific organizational requirements without sacrificing usability. While organizations may not initially possess all the requirements listed above, it’s only a matter of time before these requirements become increasingly relevant and essential to their operations.

Organizations that have all the aforementioned requirements gain the advantage of complete flexibility in tailoring their SASE policies to their specific needs. On the other hand, organizations that do not currently have all these requirements often seek a simpler user experience while keeping an eye on introducing additional functionality as their requirements evolve. This approach allows organizations to strike a balance between their current needs and future growth, ensuring that their SASE solution remains adaptable and responsive to changing circumstances.

Unless SASE solutions provide full flexibility, customization becomes challenging. Therefore, we believe SASE solutions should provide the following core capabilities:

  1. Modular Policy Management: SASE solutions encompass multiple security functions, each with its own set of policy configurations. These configurations should include options to enable/disable, set default action in case of no policy match, manage collection of multiple policy tables, define multiple policies within each policy table, establish an ordered list of policies, and set action settings, modification objects, matching attributes, and values for each policy.
  2. Policy Chaining: To enable more specific and granular policies, SASE solutions should support policy chaining. This means allowing the arrangement of policies across multiple policy tables in a collection. For example, organizations can have separate policy tables for different applications, with the main table policies using application/domain names as matching criteria to select the appropriate policy tables. This is typically accomplished through the use of policies featuring an action called ‘Jump,’ which redirects policy evaluation to the referenced policy table. The concept of policy chaining gained popularity with Linux iptables, and many security solutions subsequently incorporated this functionality. A minimal sketch follows this list.
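
Here is a minimal sketch of policy chaining with a ‘Jump’ action, in the spirit of iptables chains; the table names, matching attributes, and default action are illustrative assumptions.

```python
# Minimal sketch of policy chaining with a 'Jump' action, in the spirit of
# iptables chains. Table names and attributes are illustrative assumptions.

tables = {
    "main": [
        {"match": {"domain": "hr.example.com"},  "action": "jump", "target": "hr-app"},
        {"match": {"domain": "crm.example.com"}, "action": "jump", "target": "crm-app"},
    ],
    "hr-app":  [{"match": {"user_group": "hr"},    "action": "allow"}],
    "crm-app": [{"match": {"user_group": "sales"}, "action": "allow"}],
}

def evaluate(session: dict, table: str = "main", default: str = "deny") -> str:
    for policy in tables[table]:
        if all(session.get(k) == v for k, v in policy["match"].items()):
            if policy["action"] == "jump":
                return evaluate(session, policy["target"], default)  # follow the chain
            return policy["action"]
    return default

print(evaluate({"domain": "hr.example.com", "user_group": "hr"}))     # allow
print(evaluate({"domain": "hr.example.com", "user_group": "sales"}))  # deny
```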

The comprehensiveness of security functions within SASE can be extensive and may include:

  • NGFW (Next-Generation Firewall): Providing L3/L4 access control, DDoS protection, IP reputation, domain reputation, and Intrusion Detection and Prevention System (IDPS).
  • SWG (Secure Web Gateway): Offering TLS inspection, pre-TLS web access control, post-TLS web access control, URL reputation, file reputation, and malware protection.
  • ZTNA (Zero Trust Network Access): Similar to SWG but focused on securing hosted applications.
  • CASB (Cloud Access Security Broker): Covering cloud service reputation and cloud service access control.
  • DLP (Data Loss Prevention): Implementing access control based on Personally Identifiable Information (PII), standard confidential documents, and enterprise-specific sensitive documents.

The flexibility of policy management for each security function, along with the ability to manage policies within each function via multiple policy tables with policy chaining, is a powerful feature. Geo-distributed organizations with various regulatory requirements can particularly benefit from this flexibility.

However, smaller organizations may prefer some sort of consolidation of policy tables. In such cases, it should be possible to customize the configuration by:

  • Consolidating all pre-TLS security function configurations into a single collection of policy tables across multiple SWG/ZTNA components.
  • Consolidating all post-TLS security function configurations into another single collection of policy tables across multiple SWG/ZTNA components.
  • Retaining CASB, malware, and DLP functions as separate entities as these require complex policy definitions.
  • Opting for a single policy table within the policy table collection, thus avoiding policy chaining.

Therefore, organizations should seek SASE services that provide full flexibility while also offering custom controls to consolidate configurations for relevant security functions. This approach ensures that SASE policies are tailored to an organization’s specific needs while maintaining ease of management and scalability as requirements evolve.

Balancing User Experience with Future-Proof Flexibility

Security policy management has historically been a complex endeavor. Many products specialize in policy management for specific security appliances, resulting in a fragmented landscape. SASE addresses this complexity by consolidating multiple security appliances into a unified solution. While this consolidation offers advantages, it also introduces complexities of its own.

Traditional approaches to policy management, such as a single policy table, may seem appealing initially. However, they present numerous challenges and often fall short of meeting the requirements outlined in this article. Conversely, having an excessive number of policy engines can also lead to complexity. Striking the right balance between flexibility and simplicity is paramount.

One significant challenge in the industry is the proliferation of policies. An excessive number of policies not only degrades the user and troubleshooting experience but also carries performance implications. The multi-table approach and policy expressiveness, as described earlier, are essential strategies for reducing the volume of policies within policy tables.

SASE solutions are increasingly addressing these complexities by providing greater sophistication in policy management. It is our belief that SASE solutions will continue to evolve, implementing many of the requirements detailed in this article in the very near future. This evolution will empower organizations to strike the optimal balance between user experience, flexibility, and performance, ensuring that their security policies remain effective and adaptable in a rapidly changing threat landscape.


Enhancing SaaS Security: Next-Gen ZTNA for Authentication & Authorization

Authentication & Authorization comes in various colors

The Zero Trust Network Access (ZTNA) component of SASE is designed to provide secure inbound access to enterprise private applications. In line with the core principle of identity-based access control in Zero Trust Architecture (ZTA), ZTNA plays a vital role in authenticating users and enforcing access controls based on user types, groups, and roles on every inbound session to the Enterprise applications.

ZTNA security offers significant advantages in the following scenarios:

  • Legacy Applications: Legacy applications that lack built-in security measures are often not exposed to Work-From-Anywhere (WFA) users due to security concerns. By utilizing ZTNA to front-end these legacy applications, HTTPS termination with certificate management, authentication using protocols such as OIDC, and authorization based on context-aware access controls can be provided. This enables legacy applications to be safely accessed by WFA users over the Internet.
  • Broken Applications: Despite being developed with security in mind, some applications may not have been updated for an extended period. These applications may lack proper certificate management, with outdated or no support for uploading new certificates or auto-renewal. ZTNA can act as a security replacement for these broken applications, ensuring secure access while overcoming their security limitations.
  • New Application Architecture: Modern enterprise applications are often designed with security considerations shifted to external entities like ZTNA and service mesh technologies. This approach relieves application developers from the burden of handling HTTPS, authentication, and authorization, as security is offloaded to the front-end entity. By centralizing security management, benefits such as uniform security policy enforcement, increased productivity in application development, and simplified maintenance are achieved. Additionally, as security updates are handled externally, the frequency of patch releases aimed at addressing security issues can be significantly reduced.

Many ZTNA solutions today are good at front-ending simple enterprise applications, but they fail to provide authentication & authorization for multi-tenant applications such as SaaS applications.

ZTNA’s Role in SaaS Applications: In the context of Software-as-a-Service (SaaS) applications, ZTNA will play a vital role in strengthening and enhancing the authentication and authorization mechanisms, in my view. SaaS applications have specific requirements, including multi-tenancy, resilience against DoS/DDoS attacks, and robust protection against authentication bypass and privilege escalation attacks. This article will delve into the features of next-generation ZTNA that can assist in offloading or enhancing the authentication and authorization processes for SaaS applications. Please note that this article will not cover other features of ZTNA, such as WAAP (Web Application and API Protection), HTTPS termination, traffic management of incoming sessions to various application instances, webification of SSH/RDP/VNC services, and making applications invisible from port scanners. Its primary focus is on the authentication and authorization aspects of ZTNA.

It’s important to note that there can be confusion between the roles of CASB (Cloud Access Security Broker) and ZTNA in the context of SaaS. The CASB component of SASE focuses on securing connections to SaaS services used by enterprises, where enterprises are consumers of SaaS and CASB services. On the other hand, ZTNA, in the context of SaaS, is designed to protect the SaaS application itself, making SaaS companies consumers of ZTNA services. This differentiation is essential to understand the distinct roles and responsibilities of CASB and ZTNA in the SASE solutions.

In a previous article about identity brokers, we explored the numerous benefits of integrating brokers into SASE solutions. The advantages discussed primarily revolved around the modularity and simplicity of design, ultimately enhancing the resilience of SASE solutions. In this article, we will delve into the pivotal role of identity brokers in supporting complex applications, particularly focusing on SaaS applications.

What are the challenges with multi-tenant applications?

ZTNA of SASE excels in providing robust support for policy-based authorization. The authorization engines within SASE offer the capability to manage multiple policy tables, with each table containing multiple policies. Each policy is composed of multiple rules and specifies the action to be taken upon a successful match. The rules themselves encompass various matching attributes, which can be classified as source and destination attributes.

Destination attributes primarily pertain to the applications’ resources being accessed, such as URIs and the methods (e.g., GET, PUT, POST, DELETE) used to interact with those resources. On the other hand, source attributes are typically associated with the subjects accessing the resources. These attributes encompass user-related attributes like name, group, role, authentication service that validated the user credentials, and other user claims. They also include device context attributes, which capture the secure posture of the devices utilized by the subject and the location of the device from which the user is accessing the resources.

However, many ZTNA solutions fall short when it comes to addressing comprehensive authentication scenarios, often limiting their capabilities to non-SaaS applications. The inclusion of an Identity Broker in SASE/SSE solutions is a progressive step towards achieving comprehensive authentication across all types of applications. While it may be argued that SaaS vendors possess the capability to handle authentication and authorization within their applications, the landscape has evolved significantly.

In today’s agile environment, SaaS providers increasingly recognize the advantages of offloading security responsibilities to external entities like SASE. By doing so, they can benefit from increased productivity and heightened confidence in their overall security posture. Furthermore, this approach allows new SaaS providers to enter the market more swiftly, as they can offload authentication and authorization to an external entity and focus primarily on their core business logic. SASE solutions can play a pivotal role in supporting these new SaaS providers.

It is our belief that SASE solutions should and will be ready to take up this challenge of providing authentication and authorization security on behalf of complex applications such as SaaS applications. The following scenario gives one representative example of a SaaS application and explores how SASE, by integrating identity brokers, can help in the delegation of authentication & authorization from the applications.

Consider an example SaaS application (hosted at app.example.com) consisting of multiple API resource spaces:

  • app.example.com/service-admin-api/: This API space is exclusively for application service provider administrators.
  • app.example.com/tenants/<tenant-id>/tenant-admin-api/: Only tenant admins can access this API space under their respective tenant.
  • app.example.com/tenants/<tenant-id>/tenant-user-api/: This API space is reserved for tenant users.
  • app.example.com/tenants/<tenant-id>/public-api/: Anyone can access this API as long as they provide valid credentials through social networking sites or other supported authentication services.
  • app.example.com/tenants/<tenant-id>/collaboration-api/: Only tenant partners can utilize this API.

In this scenario, let’s also assume that the IDP for the SaaS provider is example-idp.

There are two tenants: XYZ and ABC, with their respective IDP services being XYZ-idp and ABC-idp. Each tenant also has two partners, each with their own IDP service. XYZ-P1-idp and XYZ-P2-idp are IDP services of XYZ partners. ABC-P1-idp and ABC-P2-idp are IDP services of ABC partners.

Furthermore, XYZ tenant requires authentication via Google and Facebook for access to the public API space, while ABC tenant prefers authentication through LinkedIn and GitHub.

The following authorization policies are needed in ZTNA to address the above scenario (a minimal evaluation sketch follows the list):

  1. Domain = app.example.com; user-role=app-admin; authservice=example-idp; uri = /service-admin-api/* ALLOW: Allow access to any user who has successfully logged in to the example-idp service and possesses the app-admin role for all resources under the admin-api of the application with the domain app.example.com.
  2. Domain = app.example.com; user-group=admin-group; authservice=XYZ-idp; uri = /tenants/XYZ/tenant-admin-api/* ALLOW: Allow access to any user who has successfully logged in to the XYZ-idp service and belongs to the admin-group group, for all resources under XYZ/tenant-admin-api.
  3. Domain = app.example.com; user-role=admin-role; authservice=ABC-idp; uri = /tenants/ABC/tenant-admin-api/* ALLOW: Allow access to any user with the admin-role, authenticated with the ABC-idp service, accessing the ABC/tenant-admin-api resources
  4. Domain = app.example.com; authservice=XYZ-idp; uri = /tenants/XYZ/tenant-user-api/*, /tenants/XYZ/collaboration-api/*, /tenants/XYZ/public-api/* ALLOW: Allow access to resources specified in the rule for any user that was successfully authenticated with XYZ-idp service
  5. Domain = app.example.com; authservice=ABC-idp; uri = /tenants/ABC/tenant-user-api/*, /tenants/ABC/collaboration-api/*, /tenants/ABC/public-api/* ALLOW: Allow access to resources specified in the rule for any user that was successfully authenticated with ABC-idp service
  6. Domain = app.example.com; authservice=XYZ-P1-idp; uri = /tenants/XYZ/collaboration-api/*, /tenants/XYZ/public-api/* ALLOW: Allow access to XYZ collaboration space for users authenticated with XYZ-P1-idp service.
  7. Domain = app.example.com; authservice=XYZ-P2-idp; uri = /tenants/XYZ/collaboration-api/*, /tenants/XYZ/public-api/* ALLOW: Allow access to XYZ collaboration space for users authenticated with XYZ-P2-idp service.
  8. Domain = app.example.com; authservice=ABC-P1-idp; uri = /tenants/ABC/collaboration-api/*, /tenants/ABC/public-api/* ALLOW: Allow access to ABC collaboration space for users authenticated with ABC-P1-idp service.
  9. Domain = app.example.com; authservice=ABC-P2-idp; uri = /tenants/ABC/collaboration-api/*, /tenants/ABC/public-api/* ALLOW: Allow access to ABC collaboration space for users authenticated with ABC-P2-idp service.
  10. Domain = app.example.com; authservice=google.com; uri = /tenants/XYZ/public-api/* ALLOW: Allow access to XYZ public-api space for all users authenticated with google.com.
  11. Domain = app.example.com; authservice=facebook.com; uri = /tenants/XYZ/public-api/* ALLOW: Allow access to XYZ public-api space for all users authenticated with facebook.com
  12. Domain = app.example.com; authservice=linkedin.com; uri = /tenants/ABC/public-api/* ALLOW: Allow access to ABC public-api space for all users authenticated with linkedin.com
  13. Domain = app.example.com; authservice=github.com; uri = /tenants/ABC/public-api/* ALLOW: Allow access to ABC public-api space for all users authenticated with github.com
  14. Domain = app.example.com; DENY: Deny access to the application if none of the above rules match.
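
To illustrate, a few of the policies above can be expressed as data records and evaluated first-match, as in the hypothetical sketch below; the field names and the simple prefix matching are assumptions for illustration, not a ZTNA product’s policy format.

```python
# A few of the policies above, expressed as hypothetical data records and
# evaluated first-match. Field names and prefix matching are illustrative only.

ztna_policies = [
    {"match": {"domain": "app.example.com", "user_role": "app-admin",
               "authservice": "example-idp", "uri_prefix": "/service-admin-api/"},
     "action": "allow"},                                       # policy 1
    {"match": {"domain": "app.example.com", "authservice": "XYZ-idp",
               "uri_prefix": "/tenants/XYZ/tenant-user-api/"},
     "action": "allow"},                                       # part of policy 4
    {"match": {"domain": "app.example.com"}, "action": "deny"},  # catch-all, policy 14
]

def authorize(request: dict) -> str:
    for p in ztna_policies:
        ok = all(request.get("uri", "").startswith(v) if k.endswith("_prefix")
                 else request.get(k) == v
                 for k, v in p["match"].items())
        if ok:
            return p["action"]
    return "deny"

print(authorize({"domain": "app.example.com", "authservice": "XYZ-idp",
                 "uri": "/tenants/XYZ/tenant-user-api/items"}))   # allow
print(authorize({"domain": "app.example.com", "authservice": "google.com",
                 "uri": "/tenants/XYZ/tenant-admin-api/users"}))  # deny
```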

SASE solutions excel at attribute-based access control. This means that they handle authorization functionality well. However, they are not very comprehensive when it comes to authentication. In the policies above, different levels of access are granted based on the identity provider (IDP) service that users choose to authenticate with. Also, some users may deliberately want to authenticate with a specific IDP service to access resources with minimal permissions to avoid potential data exfiltration mistakes.

Role of Identity Brokers

To address such scenarios, the integrated functionality of an identity broker is required. Identity brokers serve as OIDC (OpenID Connect) providers to the SASE/SSE proxy component while acting as OIDC/SAML/LDAP clients to the upstream identity services (authentication services).

Keycloak, an open-source IAM system, is a popular choice for many. It can be configured to fulfill the role of an identity broker and is commonly used by SASE service providers and service mesh product vendors. Hence, Keycloak terminology is used here. Keycloak offers the flexibility to handle authentication for various types of applications, including multi-tenant SaaS applications.

Authentication for multi-tenant SaaS applications can be achieved using ‘identity brokers’ in the following manner:

One realm with one client for each SaaS application with modified authentication flows:

In cases where the application-tenant cannot be identified from the URL path or HTTP request headers, the SASE proxy component can have only one OIDC client to communicate with the identity broker. During user authentication, the identity broker needs to know which IDP service to authenticate the user against. Keycloak provides standard authentication flows, such as the browser flow, and allows the creation of customized flows and their association with Keycloak clients. SASE leverages this feature by creating authentication flows where users are prompted to provide tenant information. Based on this information, the authentication flows can present the available identity providers for the user to select from. With this information, the broker can redirect users to the appropriate identity service.

One realm with multiple clients for each SaaS application:

If the application-tenant can be identified from the URL or HTTP request headers, the SASE proxy component can be configured to use one client for each application-tenant. In this case, standard browser flows with different sets of identity providers can be employed and associated with the corresponding client entities in Keycloak. The advantage is that the user is not prompted for the tenant name, resulting in a better user experience.

In summary, these strategies empower SASE solutions to effectively handle authentication for multi-tenant SaaS applications, leveraging the capabilities of Keycloak as an identity broker.

Policy-based OIDC Client Selection

The Keycloak broker offers support for multiple realms and multiple clients within each realm. It enables standard authentication flows, the creation of custom authentication flows, and the association of these flows with clients. The Keycloak broker functionality also allows for the brokering of authentication sessions between user-side authentication mechanisms and backend (upstream) authentication services. We have previously discussed how Keycloak can prompt users to identify their application-tenant and select the identity service for authentication.

These capabilities should also be leveraged by the SASE proxy, which acts as an OIDC client (also known as an OIDC relying party) for various customer applications, including multi-tenant applications.

The SASE proxy needs to support multiple OIDC clients. One approach is to have a set of OIDC clients for each customer, ensuring that customer-specific authentication-related configurations are isolated from others. Typically, each SASE customer’s OIDC set is associated with a realm in Keycloak.

In scenarios where a customer of the SASE proxy has multiple applications, each with its own domain name, it becomes necessary to provide isolation among multiple application administrators. In such cases, a subset of OIDC clients should be configured, with one client assigned to each application.

For many applications, a single OIDC client suffices if they are single-tenant applications or if the tenant cannot be identified from the traffic, as discussed earlier. However, if the tenant can be identified, one OIDC client can be configured for each application-tenant.

Due to the requirement for multiple OIDC clients, the SASE proxy should offer a mechanism for selecting the appropriate OIDC client. This is where policy-based OIDC selection becomes crucial.

A policy table with multiple policies is utilized, with each policy pointing to the corresponding OIDC client record. During the traffic flow, the SASE proxy checks whether OIDC authentication is required and then matches the customer, application domain name, and application-tenant against the policies in the table. If a match is found, the corresponding OIDC client record is used to communicate with the broker. Some implementations may have multiple policy tables, with one table dedicated to each customer, to expedite the policy matching process.
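
A minimal sketch of such policy-based OIDC client selection is shown below; the customer, domain, tenant, and client identifiers are hypothetical and follow the example scenario discussed earlier.

```python
from typing import Optional

# Hypothetical policy table for OIDC client selection. Customer, domain, tenant,
# and client identifiers are illustrative assumptions based on the scenario above.

oidc_selection_policies = [
    {"customer": "saas-provider-1", "domain": "app.example.com", "tenant": "XYZ",
     "oidc_client": "client-app-example-xyz"},
    {"customer": "saas-provider-1", "domain": "app.example.com", "tenant": "ABC",
     "oidc_client": "client-app-example-abc"},
    {"customer": "saas-provider-1", "domain": "app.example.com", "tenant": None,
     "oidc_client": "client-app-example-default"},   # tenant not identifiable
]

def select_oidc_client(customer: str, domain: str, tenant: Optional[str]) -> Optional[str]:
    for p in oidc_selection_policies:
        if p["customer"] == customer and p["domain"] == domain and p["tenant"] in (tenant, None):
            return p["oidc_client"]
    return None   # no match: fall back to the proxy's default behaviour

print(select_oidc_client("saas-provider-1", "app.example.com", "XYZ"))
# client-app-example-xyz
```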

NextGen ZTNA will adapt to multi-tenant applications

ZTNA (Zero Trust Network Access) within SASE (Secure Access Service Edge) solutions plays a crucial role in securing applications. It enables the offloading of authentication and authorization tasks from applications, allowing developers to focus on their core business logic. This approach enhances productivity and bolsters overall security.

Authentication bypass and privilege escalation vulnerabilities are common in applications, as not all developers have expertise in security. Offloading security can eliminate these vulnerabilities, ensuring stronger application resiliency.

Centralizing security in one place, such as SASE, simplifies the work of security administrators, who only need to manage a single interface for all applications.

To achieve both security and flexibility, the next generation of ZTNA within SASE solutions should address diverse application types. Many existing ZTNA solutions often struggle to support multi-tenant applications effectively. Future enhancements are expected to incorporate identity broker functionality and policy-based OIDC (OpenID Connect) client selection to cater to a wide range of application scenarios.


Ensure reliable and secure network connectivity to the world’s manufacturing hubs China and India

Top Five Connectivity Challenges for China and India and How to Overcome Them

The world has become much smaller in the past few decades thanks to globalization. Globalization has long been a core driver for digital transformation for enterprises. Connecting offices, factories, and supply chains needs a digital-first mindset and infrastructure. With the pandemic’s end officially declared by the WHO [1] and the reopening of China, recent data released by the IMF predicts that Asia is poised to drive global economic growth [2].

China, the global manufacturing hub

With its high economic growth over the past decades and the strategic importance of the Chinese market, China remains at the top of the list for companies to expand and invest in internationally. Combined with the massive shift in the production of goods, China has become the dominant country in manufacturing. However, this has come with challenges in navigating the regulatory and technological environment.

Aryaka has held strategic partnerships with leading Chinese data center and telecom providers, including Alibaba, to operate our in-country PoPs (Points-of-Presence) in Beijing, Shanghai, Shenzhen, and Hong Kong and ensure full compliance with all privacy and data security laws and regulations. Our strategic partners and their affiliates comply with all applicable laws and regulations and maintain the necessary permits, licenses, and approvals. For years, Aryaka’s customers have benefited from our global HyperScale PoP infrastructure and integrated WAN optimization and SaaS acceleration for reliable and fast connectivity for onsite and remote users. A few examples of how small, medium, and large enterprises leverage our SD-WAN and SASE-as-a-service solutions are the social media platform firm KAWO, the logistics company Transitex, the architecture firm Callison RTKL, and the chemical manufacturing company Albemarle.

India to challenge China’s dominant position

At the same time, geopolitical changes, the high growth of India’s economy and population, and investment in attracting foreign companies to expand into India challenge China’s dominant position in the global market and present an opportunity for companies to diversify their footprint. For many years, India has produced world-class business services companies alongside network and software engineers. Aryaka has had a presence in India since our founding 14 years ago and operates four PoPs in the major economic hubs of New Delhi, Chennai, Mumbai, and Bangalore. Companies are within milliseconds of one of our PoPs wherever they choose to invest and build their offices and factories on the Indian subcontinent. Premium Sound Solutions has become one of the world’s leading companies in automotive and consumer sound products. PSS operates facilities worldwide, including China, and has a sales office in India. The Belgium-headquartered company relies on our managed services to improve Disaster Recovery and network performance.

Top Five Challenges for Connectivity for China and India

China and India together are forecasted to generate about half of global growth this year [2] and be critical countries for international companies as part of their future global supply chain and operations. Keeping or developing business operations in each country has huge potential benefits.

However, enterprise network connectivity in China and India presents local challenges, ranging from the availability and quality of Internet connectivity and access to cloud-based workloads and SaaS applications to providing proof of compliance with local regulations. Poor or unstable Internet connectivity often leads to high latency and packet loss, unreliable access to the cloud and SaaS impedes productivity, and a lack of compliance risks overall business operations and the delivery of network and security services.

Based on our longstanding experience in operating in both countries, we identified five recurring challenges for enterprises to address. I highlight key aspects of each one in this blog, while this whitepaper explores global enterprises’ top five challenges in securely connecting applications and workloads with employees, sites, customers, and suppliers in China and India.

Challenge 1: Application performance

The combination of regular Internet performance issues and high regulatory compliance creates significant challenges for international businesses to connect their users and mission-critical applications.

Aryaka Solution: Several PoPs in key business metros in China and India provide low-latency access and dedicated connectivity to deliver optimal network and application performance with consistent SLAs.

Challenge 2: UCaaS and enabling global collaboration

The need for communication and collaboration tools like Microsoft Teams, Webex, Zoom, and others continues to accelerate globally. Enabling employees to be their most productive and securely connecting them to the enterprise WAN, no matter where they are located, is of the utmost importance.

Aryaka Solution: Our in-region PoP footprint and multi-segment WAN optimizes connectivity to the different UCaaS/CCaaS gateways within China, India, and internationally. Voice and Video traffic is given highest QoS priority with guaranteed bandwidth allocation to meet user expectations for productivity.

Challenge 3: IP-Based applications

Reliable access to websites and web applications is foundational for any enterprise, so IT Ops must know how to navigate China’s or India’s unpredictable Internet. One frequently proposed solution is a Content Delivery Network (CDN). Still, CDNs have issues supporting business application performance and user expectations due to a reliance on the public Internet.

Aryaka Solution: Our global, scalable WAN – based on a single-pass architecture – offers reliable performance and flexibility to support any application IT deploys, including dynamic IP-based applications, versus optimizing for specific content and sources/destinations.

Challenge 4: Remote worker connectivity

Hundreds of millions of employees in China and India, and indeed anywhere, work at least partially remotely. The hybrid workplace is here to stay. The mandate for the CIO and IT is to enable these ‘anywhere’ workers with secure and reliable access to the web and to corporate applications and workloads wherever they reside.

Aryaka Solution: Our secure remote access solution, Private Access, deployed in all HyperScale PoPs, including the China- and India-based PoPs, delivers flexibility and security. Enterprises benefit from the aggregation of traffic from branch and remote users and the delivery of common services with consistent network and security policies at our scalable PoPs, rather than a siloed architecture and point solutions.

Challenge 5: Compliance

Relying on the Internet for connectivity, or on a legacy WAN architecture from a managed service provider that is not cloud-ready, is less than ideal. And as mentioned earlier, foreign businesses can face complex compliance rules and requirements in India and China. Establishing a local presence, especially in China, can be difficult. Companies must balance legal and technical needs.

Aryaka Solution: Our strategic partners and their affiliates comply with all applicable laws and regulations and maintain the necessary permits, licenses, and approvals to deliver on this requirement. Our global PoP footprint and dual-layer core backbone remove the unpredictable nature of the public Internet.

Aryaka Regional Asia PoP Footprint and Cloud Onramps

We operate a global core network consisting of a dual-layer backbone with PoPs on six continents, providing optimal cost and performance connectivity to and from China and India and beyond. Our Network Architecture whitepaper goes into detail about the setup of our PoPs and global backbone with onramps to hundreds of cloud resources.

In Conclusion

No matter where enterprises set up their manufacturing presence, our longstanding expertise, experience, and partnerships in China and India, combined with our global network and security architecture delivered as a managed service, make us a trusted partner for SD-WAN and SASE as a service.

Download our paper Addressing the Top Five Connectivity Challenges for China and India to learn more.


The post Ensure reliable and secure network connectivity to the world’s manufacturing hubs China and India appeared first on Aryaka.

]]>
https://www.aryaka.com/blog/connectivity-challenges-for-china-and-india/feed/ 0
Aryaka AppAssure: New frontiers in delivering Application Experience https://www.aryaka.com/blog/aryaka-appassure-on-application-performance/ https://www.aryaka.com/blog/aryaka-appassure-on-application-performance/#respond Wed, 15 Dec 2021 13:00:43 +0000 https://www.aryaka.com/?p=35848 Role of WAN in delivering App Performance In my previous blog, I argued that Overlay SD-WAN solutions have failed to deliver on their intended promise of providing a reliable application performance. Overlay SD-WAN’s approach to keep the transportation network at arm’s length fails to ensure application performance. The simple reason is that application performance depends […]

The post Aryaka AppAssure: New frontiers in delivering Application Experience appeared first on Aryaka.

]]>
Aryaka AppAssure - Application Experience

Role of WAN in delivering App Performance

In my previous blog, I argued that overlay SD-WAN solutions have failed to deliver on their intended promise of providing reliable application performance. Overlay SD-WAN's approach of keeping the transport network at arm's length fails to ensure application performance. The simple reason is that application performance depends on network performance, and if you treat the network as a black box, you cannot deliver on application performance either.

In this blog, I will explore key features of the WAN solution that solves application performance issues for enterprises.

What does it mean to deliver application performance in the context of the WAN? Simply put, the WAN should not be a bottleneck in delivering the application experience to the users. Users’ application experience should be no different whether an application is distributed across public and private clouds or served from a single server on the LAN.

WAN Characteristics to deliver App Performance

The ability to deliver application performance can be compared to a three-legged stool: a robust transport network, end-to-end analytics and observability of the network and applications, and the ability to remediate issues. To expand further, the WAN solution needs a few key capabilities to successfully deliver application performance.

Correlation between Network Performance and Application behavior

Application issues are routinely blamed on the network, and as one of our customers put it, observability and correlation are key to achieving a faster "Mean Time To Innocence" for IT network teams! The first step in delivering application performance is the ability to measure and monitor application behavior and correlate it with network performance. When a user is troubleshooting an application performance issue, the sooner the root cause is identified, the sooner it can be resolved.
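As a rough illustration of what such correlation looks like in practice, here is a minimal Python sketch (illustrative only, not Aryaka's implementation) that computes the Pearson correlation between sampled application response times and the packet loss observed on an underlay segment; a coefficient close to 1 points the investigation at the network, while a weak one points back at the application, shortening the "Mean Time To Innocence".

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical per-minute samples pulled from an observability pipeline.
app_response_ms = [210, 225, 480, 520, 230, 615, 240, 250]  # application layer
link_loss_pct   = [0.1, 0.2, 2.5, 3.1, 0.2, 4.0, 0.3, 0.2]  # one underlay segment

r = pearson(app_response_ms, link_loss_pct)
print(f"correlation = {r:.2f}")  # close to 1.0 -> the network is the likely culprit
```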

Observability:

Every WAN solution claims observability, but a typical overlay SD-WAN solution has full visibility only at the edge: once traffic is handed over to the underlay network, that visibility disappears. Being able to monitor the application at every segment of the network is therefore key. Users should be able to monitor applications from a network perspective, monitor the network from an application perspective, and see how the two impact each other. Modern observability techniques that allow users to analyze application traffic across different dimensions, slicing and dicing it as needed, are also very important in understanding and delivering application performance.

Total control over the transport network:

Once an issue causing poor application performance is identified, the remedy often requires granular, end-to-end control over every segment of the transport network: first mile, middle mile, and last mile.

Per-Application Steering, Optimization and SLA:

Overlay SD-WAN solutions are terrible, almost deceptive, in their claim of providing application SLAs. Packing multiple applications into a wide moat of Class-of-Service and then outsourcing policy and SLA enforcement to the underlay network does nothing for application performance. It is a coarse tool that yields no benefit in today's complex application environment.

With diverging application architectures, the WAN should be able to steer individual applications to different destinations: a remote site or data center within the enterprise network, a SaaS or IaaS service, a cloud security provider, or simply direct Internet access.

Various optimization techniques, such as WAN acceleration including TCP optimization, SSL, and other proxy-based optimizations, improve application performance significantly. These techniques should be available per application.

Finally, the ability to define a per-application SLA and guarantee it across multiple segments in the network is needed to ensure superior application performance.
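The sketch below shows what a per-application SLA definition and compliance check could look like in the abstract. The application names, thresholds, and data structures are hypothetical and purely illustrative; this is not Aryaka's policy format.

```python
from dataclasses import dataclass

@dataclass
class AppSla:
    """Per-application SLA targets, measured end to end across all WAN segments."""
    name: str
    max_latency_ms: float
    max_loss_pct: float
    max_jitter_ms: float

# Hypothetical policy table; names and thresholds are illustrative only.
SLAS = [
    AppSla("sap-hana", max_latency_ms=80, max_loss_pct=0.1, max_jitter_ms=10),
    AppSla("ms-teams", max_latency_ms=150, max_loss_pct=1.0, max_jitter_ms=30),
    AppSla("web-browsing", max_latency_ms=400, max_loss_pct=2.0, max_jitter_ms=100),
]

def sla_violations(app: AppSla, latency_ms: float, loss_pct: float, jitter_ms: float):
    """Return the list of SLA dimensions the measured sample violates."""
    breaches = []
    if latency_ms > app.max_latency_ms:
        breaches.append("latency")
    if loss_pct > app.max_loss_pct:
        breaches.append("loss")
    if jitter_ms > app.max_jitter_ms:
        breaches.append("jitter")
    return breaches

print(sla_violations(SLAS[0], latency_ms=95, loss_pct=0.05, jitter_ms=4))  # ['latency']
```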

Simplified Workflows:

Monitoring, configuration, and troubleshooting workflows should all be integrated into a single, simple tool with closed-loop workflows and instantaneous feedback. Users should be able to manage a global enterprise network and its applications from a single pane of glass. As discussed in my previous blog, tool sprawl has become a major concern for enterprise IT teams, so minimizing the number of tools is another hallmark of a successful WAN solution.

Co-management:

An enterprise’s application environment is dynamic. With hundreds of applications running in the network, application requirements change constantly. Enterprises cannot depend on a Managed Service Provider or telco to keep up with requests and respond to them rapidly. They need expanded co-management capabilities that allow them to manage the application environment with confidence. Those co-management capabilities need to be intuitive and foolproof, giving IT teams and CIOs confidence that application performance is in safe hands with the proper tools.

Self-healing networks and application automation:

Beyond basic link or path failover, the network should be able to automatically steer traffic, per application, around degraded or failed devices, links, network segments, and WAN paths to maintain application delivery and SLAs.

Anomaly detection and predictive analytics capabilities within the WAN should be able to automate application behavior analysis and optimization.
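As a simplified illustration of that idea, the following Python sketch uses an exponentially weighted moving average of path latency to flag a degraded path and switch an application onto an alternate one. Real self-healing logic is far more involved; the thresholds, sample values, and path names here are assumptions made for the example.

```python
class PathHealth:
    """EWMA of path latency with a simple anomaly threshold; illustrative only,
    not a production self-healing algorithm."""

    def __init__(self, alpha: float = 0.2, threshold: float = 1.5):
        self.alpha = alpha            # smoothing factor for the moving average
        self.threshold = threshold    # degraded if latency exceeds threshold * EWMA
        self.ewma = None

    def degraded(self, latency_ms: float) -> bool:
        if self.ewma is None:
            self.ewma = latency_ms
            return False
        anomalous = latency_ms > self.threshold * self.ewma
        self.ewma = self.alpha * latency_ms + (1 - self.alpha) * self.ewma
        return anomalous

paths = {"primary": PathHealth(), "backup": PathHealth()}
active = "primary"

for sample in [42, 45, 44, 140, 150]:  # hypothetical latency samples on the active path
    if paths[active].degraded(sample):
        active = "backup" if active == "primary" else "primary"
        print(f"steering application traffic to the {active} path")
```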

SD-WAN overlay architecture vs Aryaka Cloud-First architecture

All this requires the right architecture. The prerequisite is a Cloud-First WAN with an integrated overlay and underlay, built from the ground up. The Cloud-First WAN's ability to observe and control every segment of the network sets it apart from overlay SD-WAN solutions. That is why, even though it sounds simple, it has been incredibly hard for telcos, DIYers, and MSPs to do with their stitched-together overlay SD-WAN solutions from multiple technology vendors and network providers. Application performance at layer 7 demands certain network performance at layers 2 and 3. When you have an application-aware network with an integrated overlay and underlay, it is possible to understand the application's needs and fulfill them.

Aryaka AppAssure: Ensuring Application Experience

Aryaka’s Cloud-First WAN architecture, together with our recently announced FlexCore, is perfectly positioned with all the above-mentioned characteristics to deliver reliable and consistent application performance. Further, with the recent launch of AppAssure, Aryaka can deliver an unparalleled application experience to its customers. Integrated with Aryaka's Cloud-First WAN, Aryaka AppAssure delivers application observability and co-management capabilities with simplified workflows to optimize, monitor, and ensure application performance. For more information, see our Solution Brief.

As argued in my previous blog, in contrast to an overlay SD-WAN solution, Aryaka's AppAssure, integrated with our Cloud-First WAN, not only solves the inflexibility and high cost of MPLS-based solutions but also delivers the application experience that overlay SD-WAN failed to provide.

In my next blog, I will take a deep dive into the AppAssure solution to demonstrate how it delivers an outstanding application experience to enterprise users.

Aryaka AppAssure Visibility

Book a Demo

Book a demo with an Aryaka expert to see how Aryaka AppAssure can help your organization reduce tool sprawl and eliminate swivel-chair operations.

Additional Resources

Solution Brief – Aryaka AppAssure: Breakthrough Network and Application Visibility and Control

Blog Part 1 – SD-WAN Overlay – A broken promise of (not) delivering application performance

Datasheet – Aryaka SmartConnect EZ and Pro

The post Aryaka AppAssure: New frontiers in delivering Application Experience appeared first on Aryaka.

]]>
https://www.aryaka.com/blog/aryaka-appassure-on-application-performance/feed/ 0
Aryaka’s Flexcore – Is it a Lexota? https://www.aryaka.com/blog/aryaka-flexcore-architecture/ https://www.aryaka.com/blog/aryaka-flexcore-architecture/#respond Thu, 09 Dec 2021 12:57:12 +0000 https://www.aryaka.com/?p=35747 You don’t have to be a big automobile afficionado to know that every major auto brand has a luxury line with its own distinct branding. For instance, Toyota has Lexus; Honda has Acura; Nissan has Infinity; Volkswagen has Audi and so on. The reasoning behind the separate brand strategy is simple. It helps differentiate the […]

The post Aryaka’s Flexcore – Is it a Lexota? appeared first on Aryaka.

]]>
Aryaka’s Flexcore – Is it a Lexota?

You don’t have to be a big automobile aficionado to know that every major auto brand has a luxury line with its own distinct branding. For instance, Toyota has Lexus; Honda has Acura; Nissan has Infiniti; Volkswagen has Audi; and so on. The reasoning behind the separate brand strategy is simple: it helps differentiate the product (and service) offerings better. Having owned a Toyota car and a Lexus SUV, I am aware of the differences in both product and service. Lexus is a better designed car, with nicer looking and more expensive leather seats, better looking finishes, and even premium paint. When it comes to service checks, the experience is different too. The Lexus service advisor ensures all my service needs are met. The customer service lounge, while waiting, has nicer amenities. On the other hand, the Toyota service, while good and efficient, lacks the pampering and extra touches of the Lexus brand, and of course the prices of both the product and service differ as well! I wish the oil change charges that I paid at Lexus were the same as Toyota's.

In the world of managed network services, Aryaka has always offered a superior "Lexus-like" experience for its customers. If you don't believe me because I work there, then look at our Gartner Peer Insights reviews to hear what our customers say about us, or look at our Net Promoter Score (NPS) rating, which is an unbelievable 65. The rest of the industry is below 15. That puts us in a nice "niche" category, but we lacked the "Toyota-like" mass appeal, which is about to change now.

EZ is Easy!

We are offering a brand-new SmartConnect-EZ product over a private Layer-3 core. It is cost-optimized and will widen Aryaka's appeal to the "Small and Medium" segment of the Enterprise market. We continue to offer our premium SmartConnect-Pro offering, which runs on a Layer-2 core, for our more discerning customers who want to deliver the absolute best network for the best possible application performance. The Pro offers the best application performance because we do TCP optimization at the network level, application optimization using SSL and CIFS proxies, and WAN optimization features like de-duplication and compression, all from our POPs (Points of Presence), without the need for any external appliances.

For the cost-optimized SmartConnect-EZ product line, these optimizations are disabled.

What?

Yes, you read that right. No WAN optimization on EZ. Zilch, zero, nope, nada.

But the Layer-3 core is not without its unique advantages. We hand-pick Tier-1 ISPs for the long-haul meshes over the Internet. We not only ensure that customers' traffic gets from point A to point B over the lowest possible latency link, but also that it sees the best network characteristics to deliver optimal performance. Aryaka deals with over 200 ISPs all over the world, and we know which providers deliver the best network. We leverage this experience to deliver the best value for our customers. One common trick that all major ISPs employ is prioritizing ICMP traffic to make themselves "look good". They know very well that customers typically perform "ping" tests to check latencies. Ping runs on ICMP, and by prioritizing ICMP the ISP looks "good" when unsuspecting prospects do ping tests to evaluate one ISP against another. While they may pass your ping test with flying colors, the real test is what happens to your critical application data when it traverses their network. The Internet works on peering, and one can view the peering hierarchy as a parent-child relationship. Tier-1 ISPs (usually the big names) peer with one another. Tier-2 and Tier-3 ISPs peer with their corresponding Tier-1 provider. As your traffic goes from point A to point B over the Internet, unbeknownst to you it crosses several peering junctions. A Tier-1 provider might have the resources to avoid choking or over-subscription, but you cannot say the same for any of the lower-tier providers. And when it comes to preference, almost all of them will give higher preference to paying customers like Aryaka, as we buy routes directly from the Tier-1 ISPs. So the next time an undersea cable gets cut, guess whose traffic will have to take the longer detour: Aryaka's customers' or the average home broadband user's?
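If you want to see past the ICMP trick yourself, measure latency the way your applications experience it, for example via TCP connection setup time rather than ping. The sketch below is a generic, hypothetical probe; the host and port are placeholders for whatever SaaS or data-center endpoint matters to you.

```python
import socket
import time

def tcp_handshake_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Measure TCP connect time in milliseconds. Unlike ICMP ping, the SYN/ACK
    exchange receives the same forwarding treatment as real application traffic."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    # Hypothetical endpoint; substitute the SaaS or data-center host you care about.
    host = "example.com"
    samples = [tcp_handshake_ms(host) for _ in range(5)]
    print(f"{host}: min={min(samples):.1f} ms, max={max(samples):.1f} ms")
```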

Silver or Gold?

For support and services, we are introducing a new tier called the "silver tier" for our EZ products, and the existing Pro service offering is renamed the "gold tier". The silver tier comes with a slightly slower response time and a longer review cycle (quarterly for gold vs. annually for silver), and drops a few other things like dedicated support personnel. But rest assured, when an issue crops up in your network, our lion-hearted support team is always there to get you back up and running!

So, in some ways, one could say we have a complete product and service offering – a higher end “Pro” and the lower-end “EZ”, like the 2-brand strategy of leading car companies.

But I want both!

This is where our similarities end, as we have added a unique twist. We like to call it the "Flexcore" strategy. Unlike the binary experience of the car companies, we would like to offer that choice to our customers. Let them decide what kind of product and service offering they would like. Almost all enterprises have different needs. For instance, not all applications have the same sensitivity to packet loss, latency, and jitter. And not all sites are similar. Some sites, like HQ, DC, or big branch locations, might have different network requirements than a smaller site, a remote office, or an individual home office. Likewise, not all workers have the same network needs. A CAD engineer working on product design at a manufacturing company requires a certain network response to render high-resolution images from the cloud or data center, while another colleague in the very same location could be working on a productivity application like Microsoft 365, which works perfectly fine over the plain old Internet.

Aryaka Flexcore Architecture

We at Aryaka, with our unique capabilities, can make this happen. With AppAssure and Deep Packet Inspection, we can treat every application at every site for every worker differently. We can choose to steer, for instance, an SAP database transaction that is sensitive to loss, latency, and jitter over the Layer-2 core, while steering a personal productivity application over the Internet or over our Layer-3 core. This is what we refer to as our "Flexcore" architecture. Or even better, you could start off deploying all applications over the Layer-3 core and selectively move the ones that require a superior network onto the Layer-2 core. All with a few mouse clicks! Try doing that with any other managed network provider!

Such flexibility is impossible with traditional SD-WAN, where best-effort Internet connectivity is combined with MPLS by establishing an overlay network. MPLS is inherently inflexible and will not allow for the mixing and matching of Aryaka's Flexcore architecture. Also, overlay networks with path steering do not structurally address performance issues and only use guessing algorithms to pick the best path. The Flexcore architecture provides the deterministic performance of MPLS with superior levels of agility and flexibility. Who doesn't want to instantly move between a Toyota and a Lexus without a hitch?
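Conceptually, the steering decision boils down to a mapping from a DPI-classified application to a transport. The toy table below illustrates the idea using made-up application classes and core names; it is not Aryaka's configuration syntax.

```python
# Illustrative steering table: application class -> transport. The names and the
# two-core model mirror the Flexcore idea described above, but this is a toy
# sketch, not Aryaka's actual DPI engine or policy format.
STEERING = {
    "sap-database": "layer2-core",      # loss/latency/jitter sensitive
    "cad-rendering": "layer2-core",
    "microsoft-365": "layer3-core",     # tolerant productivity traffic
    "web-browsing": "direct-internet",
}

def steer(app_id: str) -> str:
    """Return the transport for a classified application, defaulting to the
    cost-optimized Layer-3 core for anything unrecognized."""
    return STEERING.get(app_id, "layer3-core")

for app in ("sap-database", "microsoft-365", "unknown-app"):
    print(app, "->", steer(app))
```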

We also offer a service upgrade SKU from silver to gold for your EZ site, if you wish to have that highly sought-after "white-glove" Aryaka service experience, for a nominal cost.

So, unlike the car companies, where you have a binary choice of a Lexus or a Toyota experience, with Aryaka we have de-coupled the product and service to offer you a wide range of choices for your different application and site needs. Should we refer to our Flexcore then as Lexota?

Our goal at Aryaka is to help network managers improve their Mean Time to Innocence (MTTI). Network managers almost always get blamed for application performance issues and this has gone on for far too long. With Aryaka by their side, network managers should be able to wear the T-shirt that says “It is NOT the network, stupid!”.

The post Aryaka’s Flexcore – Is it a Lexota? appeared first on Aryaka.

]]>
https://www.aryaka.com/blog/aryaka-flexcore-architecture/feed/ 0
SD-WAN Overlay – A broken promise of (not) delivering application performance https://www.aryaka.com/blog/application-performance-challenges-with-sdwan/ https://www.aryaka.com/blog/application-performance-challenges-with-sdwan/#respond Mon, 29 Nov 2021 13:34:00 +0000 https://www.aryaka.com/?p=35426 Expectations from WAN solutions and their ability to deliver application performance has evolved over the last decade as the application landscape and WAN solutions themselves have gone through a sea change. A Brief Look At Past and Present During the early 2010s, MPLS became the preferred solution to deliver WAN connectivity between branch offices, headquarters, […]

The post SD-WAN Overlay – A broken promise of (not) delivering application performance appeared first on Aryaka.

]]>
SD-WAN Overlay - A broken promise of (not) delivering application performance

Expectations of WAN solutions and their ability to deliver application performance have evolved over the last decade, as the application landscape and WAN solutions themselves have gone through a sea change.

A Brief Look At Past and Present

During the early 2010s, MPLS became the preferred solution to deliver WAN connectivity between branch offices, headquarters, and data centers for the enterprise. The enterprise application landscape was simpler. Applications were fewer, monolithic, and were hosted either at the HQ or at the data center. Robust connectivity from branch offices to both the HQ and to data centers was sufficient to guarantee application performance for the users in the branch offices. MPLS did that reliably and was able to deliver decent application performance for enterprise users.

Subsequently, the inflexibility and high cost of MPLS paved the way for the SD-WAN overlay orchestrator. An SD-WAN overlay made WAN networks more flexible and took advantage of lower-cost Internet paths for less critical applications, while also reducing cost. Another important expectation of SD-WAN is to deliver application performance: Aryaka's 5th annual State of the WAN report states that application performance has been a major driver to migrate to SD-WAN for over 30% of respondents. Meanwhile, the enterprise application landscape has been evolving and becoming more complex in the following ways:

  1. Increasing number of Applications: Typical enterprises have hundreds of applications running, and the number continues to increase rapidly. As reported in Aryaka's 2021 State of the WAN Report, the share of enterprises with over 500 known applications has grown by almost 50%, from 32% to 47%.
  2. Distributed Application Architecture: The application architecture that used to be monolithic and hosted at a single location has become distributed and hosted at various locations: on premises, in data centers, in the public cloud, etc.
  3. Distributed Users: Users are no longer restricted to offices behind private networks; rather, they are distributed anywhere, accessing applications over the Internet.
  4. Business Criticality of Applications: Applications have become central to the business. The application layer is where technology meets the business and revenue is realized in the digital economy. Any disruption to application performance or availability results in real loss of revenue or an increase in costs.

Increased complexity and business criticality have made delivering application performance hard, and yet it remains more crucial than ever! In fact, it is so important that, for enterprises, the objective of the WAN is steadily shifting from just providing robust connectivity to ensuring a consistently great application experience for their users.

A Broken Promise

SD-WAN overlay architectures eased MPLS's shortcomings of inflexibility and cost. But an SD-WAN overlay architecture did not prove to be such a great solution for delivering application performance. The fundamental shortcoming has been the separation of the underlay transport network and the overlay virtual orchestrator. An SD-WAN overlay orchestrator can perform application traffic steering over the underlay networks based on policies, but it has neither control over the underlay networks to deliver QoS nor visibility into them when bottlenecks occur. The main challenges that enterprise IT teams face in ensuring application performance can be summarized in the points below:

Lack of Visibility: The virtual overlay has no visibility into the underlay transport network. When application issues appear, troubleshooting them against the underlay transport network becomes much harder. A survey by Sirkin Research found that 35% of network professionals reported poor visibility and monitoring performance across all network fabrics as a challenge or a major challenge.

Lack of Control: An overlay SD-WAN orchestrator defines QoS policy, but the actual delivery of QoS is outsourced to the underlay transport network. The SD-WAN orchestrator has no control over the transport medium. When the underlay network fails to deliver the QoS, the orchestrator cannot fix the bottleneck; it simply has to find an alternative network that satisfies the QoS.

Complexity: Defining and applying application policies in an SD-WAN orchestrator for multiple underlay transport networks is far too complex, despite vendors' promises to the contrary. Further, Sirkin Research found that 31% of network professionals reported spending too much time managing cumbersome workflows between critical systems as a challenge or a major challenge.

As a result, application performance has suffered under an SD-WAN overlay architecture.

More Tools Are Part Of The Problem

Enterprises augment overlay SD-WAN with many other visibility and control tools to manage the underlay transport network(s). An EMA survey [1] reports that over 64% of enterprises use between 4 and 10 separate tools, and another 17% use more than 10 tools, for network and application visibility. As a result, delivering application performance in a hybrid world has become a disjointed patchwork of separate tools for the cloud, the on-premises network, virtual overlays, and underlay tunnels. This increasing number of tools has created 'tool sprawl' and a swivel-chair environment for IT teams, creating more operational complexity and increased Mean Time to Resolution (MTTR). Even with more tools, IT teams are unable to proactively identify performance issues or correlate application performance problems with underlay network issues; 38% find this to be a challenge or a major challenge.

Even though SD-WAN achieved its stated goals of flexibility and cost reduction compared to MPLS, by some measures it made delivering application performance worse. As a result, application experience is one of the top CIO concerns. Contrary to the expectations of the many who migrated their legacy enterprise networks to an SD-WAN overlay to get better application performance, it is not delivering on the promise of application performance.

Future Outlook

In my next blog, I will explore how a cloud-first approach to a WAN solution, without the separation of overlay and underlay networks, can close the gaps of an SD-WAN overlay architecture to ensure application experience for users of enterprise networks.

Also, don't miss the Aryaka Breakthrough event on December 7 at 10 AM PST.

Resources

[1] Networkworld article – How to consolidate network management tools

Sirkin Research 2019 Top Network Performance Challenges

Aryaka’s 5th Annual Global State of the WAN

Aryaka Breakthrough Hub

The post SD-WAN Overlay – A broken promise of (not) delivering application performance appeared first on Aryaka.

]]>
https://www.aryaka.com/blog/application-performance-challenges-with-sdwan/feed/ 0
Hate Your VPN? How to Improve Application Performance for the Remote and Mobile Workforce https://www.aryaka.com/blog/how-to-improve-application-performance-for-remote-and-mobile-workforce/ https://www.aryaka.com/blog/how-to-improve-application-performance-for-remote-and-mobile-workforce/#respond Mon, 25 Oct 2021 21:00:23 +0000 https://www.aryaka.com/?p=17654 Clunky, unreliable, and slow remote access options, like VPN, are the necessary evil that IT administrators love to hate. This vital piece of the corporate puzzle helps keep your information secure and your global remote and mobile workforce in sync. However, it also creates headaches – and high costs – for teams that want to […]

The post Hate Your VPN? <br/>How to Improve Application Performance for the Remote and Mobile Workforce appeared first on Aryaka.

]]>
VPN for Remote Access
Clunky, unreliable, and slow remote access options, like VPN, are the necessary evil that IT administrators love to hate. This vital piece of the corporate puzzle helps keep your information secure and your global remote and mobile workforce in sync. However, it also creates headaches – and high costs – for teams that want to get work done quickly and reliably across distances.

Slow access, frequent disconnects, and poor application performance are the constant complaints from end users to network/IT admins, and they bode poorly for the productivity and success of the organizations those end users belong to.

In today’s global business landscape, remote access to data and applications from anywhere in the world is a critical tool. The rise of a remote and mobile workforce means more employees are capable of accessing company information while on the road and away from the corporate headquarters or branch offices. Global enterprises are also partnering with companies in different geographies to outsource or optimize business processes, and they need those employees and partners to access corporate applications in remote locations as well.

Where Do VPNs Fail?

Slow VPN Affecting Performance
Remote access methods like VPNs can fail in a number of ways: they may be very slow, or they may time out. Users may be able to access the network, but find that download/upload times are excruciatingly long, or that performance is much worse than they would expect in the office.

The fundamental problem is the Internet.

VPNs leverage this public network of networks, which has multiple bottlenecks. Like a public highway, the Internet can become congested at peak times, causing slowdowns and standstills. If you are at a long distance from your end-point, it will take you longer to get where you’re going, especially if you have a lot of data to deliver.
VPN slows down over public Internet

The unpredictable variation in latency (along with high latency itself) means that slowdowns in the speed and performance of applications are almost inevitable. The Internet is also a lossy network, where congestion along the highway leads to packet loss, continued slowdown, and poor application performance.
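A rough rule of thumb for why this hurts so much is the Mathis model for a single TCP flow, throughput ≈ MSS / (RTT × √loss). The numbers below are hypothetical, but they show how a long, lossy VPN path caps throughput at a small fraction of what the same flow achieves on a short, clean one, regardless of how much bandwidth you buy.

```python
from math import sqrt

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss: float) -> float:
    """Approximate single-flow TCP throughput using the Mathis et al. model:
    throughput ~= MSS / (RTT * sqrt(loss)). Returns megabits per second."""
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8) / (rtt_s * sqrt(loss)) / 1e6

# Hypothetical long-haul VPN path: 250 ms RTT, 1% loss, 1460-byte MSS.
print(f"{tcp_throughput_mbps(1460, 250, 0.01):.1f} Mbps")   # roughly 0.5 Mbps
# Same flow on a short, clean path: 2 ms RTT, 0.01% loss.
print(f"{tcp_throughput_mbps(1460, 2, 0.0001):.0f} Mbps")   # hundreds of Mbps
```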

How Do Companies Currently Solve VPN Problems?

The current methods for solving poor remote access performance are costly and/or ineffective. For example:

  • Wait to get Better Network Access
    When remote and mobile users experience slow VPN performance, they might just wait for better network access, thinking that the last-mile Internet may not be good enough. The wait is often futile, as the problem lies in the middle mile.
  • Deploy Multiple VPN Concentrators
    When companies have a pressing need to scale, they turn to the IT team to deploy multiple VPN concentrators all over the world. This complex solution is costly and hard to manage, which introduces new problems that will hinder the success of your deployment.
  • Use Alternate Methods
    Some workers attempt to work around VPN issues by simply sending emails and asking others to deliver messages and files. This can lead to miscommunication and loss of information.
  • Use Shadow IT
    The point of using a VPN is to keep information safe. When VPNs fail, some workers upload files and attempt to access information through cloud applications like Dropbox that are not authorized by the enterprise. This is a huge security risk as well as a compliance gap.

A Solution for Today’s Remote Access Performance Needs

Working on tablet via VPN
Aryaka’s solution, SmartACCESS, solves the challenges of slow and unpredictable VPN performance by taking the Internet out of the equation.

SmartACCESS is the first clientless SD-WAN for remote access. It is the only solution that combines dynamic CDN capabilities with SD-WAN technology to deliver reliable, fast, and predictable remote access anywhere in the world. Delivered as a cloud-based service, it can be deployed in hours and scaled in minutes.
Aryaka’s SmartACCESS for Mobile & Remote Workforce

Aryaka SmartACCESS combines dynamic CDN capabilities with SD-WAN technology to deliver fast and reliable application performance anywhere in the world for remote and mobile employees.

By building our own private network of 28 points of presence (PoPs) around the world, we've put 95% of the world's business users within 30ms or less of the closest end-point. These PoPs are fully meshed into a fully managed global private network.

This private network, which also can be optimized for faster application performance, bypasses the latency and packet loss issues experienced on the public Internet and replaces the need for your in-house IT team to deploy and manage multiple concentrators.

This makes your applications fast and predictable, and supports all those who require access to your corporate network– no matter where in the world they travel and work.

As a clientless solution it also allows enterprises to continue using their existing VPN technology without disrupting security models, enabling global business and expansion initiatives immediately. It supports all corporate applications, including on-premises and cloud/SaaS applications that can be backhauled over a data center connected by secure networks.

SmartACCESS enables IT to realize the full benefit of their VPN investment and ensures a more productive remote workforce.

Stop hating your VPN and start working smarter with Aryaka’s SmartACCESS. Learn more in our datasheet here.

The post Hate Your VPN? <br/>How to Improve Application Performance for the Remote and Mobile Workforce appeared first on Aryaka.

]]>
https://www.aryaka.com/blog/how-to-improve-application-performance-for-remote-and-mobile-workforce/feed/ 0
Blame Your Network for Poor SolidWorks Performance https://www.aryaka.com/blog/blame-your-network-for-poor-solidworks-performance/ https://www.aryaka.com/blog/blame-your-network-for-poor-solidworks-performance/#respond Tue, 20 Jul 2021 12:43:14 +0000 https://www.aryaka.com/?p=32938 With even the most basic smartphones boasting a 30GB storage capacity and a whopping 500GB for the top-of-the-range Phones, have you ever pondered upon the question — how much data there is in the world? Heads up – Do not proceed if numbers make you dizzy. In 2018, a total of 33 zettabytes (ZB) of […]

The post Blame Your Network for Poor SolidWorks Performance appeared first on Aryaka.

]]>
Blame Your Network for Poor SolidWorks Performance

With even the most basic smartphones boasting 30GB of storage capacity, and a whopping 500GB for top-of-the-range phones, have you ever pondered the question: how much data is there in the world?

Heads up – Do not proceed if numbers make you dizzy.

In 2018, a total of 33 zettabytes (ZB) of data was created, copied, consumed, and captured. An equivalent of 33 trillion gigabytes.

It shot up to 59ZB in 2020, and by 2025, there will be 175 zettabytes of data in the global datasphere. For reference — one zettabyte is 8,000,000,000,000,000,000,000 bits.

Arguably most of you must’ve lost me at 33 trillion gigabytes!

Big data

The Collective Online Footprint is Only Going Up!

Owing to the pandemic, most of us are working from home, have adopted telemedicine over seeing a doctor in person, consume Netflix as a staple diet to cure boredom, and seem destined to live life behind our computers, tablets, and smartphones for the foreseeable future.

Needless to say, the burden of keeping our digital life afloat lands upon the Internet, or rather, on network connectivity.

While it's tolerable to have a sub-par Netflix experience or miss a few online classes, the same cannot be said about your business. With a globally dispersed, nomadic workforce burning the candle at both ends to keep the lights on, it is not acceptable to be competing for bandwidth with a Snapchat or YouTube user.

Some Applications Consume More Data Than Others

Some applications can survive bandwidth starvation, most applications take a severe performance hit, and some just come to a standstill. Given these applications facilitate online collaboration via sharing large files on a regular basis, being bandwidth-intensive is in their DNA.

Take SolidWorks, for example. Hailed as the primary design tool of choice for most organizations and almost 70% of engineering schools and universities, the SolidWorks family of 3D CAD applications boasts a powerful suite of tools that businesses lean on to meet their 3D modelling needs.

Enterprises rely on CAD/CAM (Computer-Aided Design/Manufacturing) applications such as SolidWorks for collaboration between design, engineering, and project teams. While the application works flawlessly over the LAN, performance degrades as distance increases, such as when it is used in a WAN environment.

However, sharing and collaborating on large CAD documents with a globally dispersed team is a competitive necessity for most enterprises. The more globalized the company, the more distributed its teams become.

Gimme your bandwidth

Application Performance Is Inversely Proportional to Distance

Picture this — an architectural firm in California is collaborating over some design files for an under-construction building in Germany. Meanwhile, specialty parts will need to be acquired from an office in China.

Based on a recent test, I found that the average transfer time for a 1000 MB file to travel from India to San Jose is approximately 20 minutes.

Poor file transfer times not only hurt the end-user experience but also bog down productivity. Tasks such as editing remote CAD files or uploading local ones seem impossible if the branch offices are located in remote global locations.
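A quick back-of-the-envelope check of that figure shows how little of the purchased bandwidth actually turns into goodput on such a path; the calculation below simply reuses the numbers quoted above and treats 1000 MB as 8,000 megabits.

```python
# Back-of-the-envelope check of the transfer quoted above:
# 1000 MB in roughly 20 minutes works out to only a few megabits per second of
# goodput, far below what the underlying links are typically sold at.
file_mbits = 1000 * 8          # 1000 MB expressed in megabits
transfer_s = 20 * 60           # 20 minutes in seconds
goodput_mbps = file_mbits / transfer_s
print(f"effective goodput ~= {goodput_mbps:.1f} Mbps")  # ~6.7 Mbps
```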

Aryaka vs internet

But Wait…There’s More

The performance degradation over long distances is just the tip of the iceberg. Traditional ISP routing and peering practices just do not cut it in a site-to-site deployment scenario. Then there is the problem of packet loss, which serves as a breeding ground for multiple other performance-degrading factors.

While some may think pumping more bandwidth at the problem is a potential solution, it does little to address packet loss, latency, and other issues. MPLS could help, but then again, hefty costs, lengthy deployment timelines, and zero optimization benefits are a deal-breaker, given the dynamic nature of the requirement.

Does your WAN need a workout

The R in Aryaka Stands for Reliable

With Aryaka’s Cloud-First WAN, CAD/CAM applications can run up to 20x faster — even in the most remote corners of the world. SolidWorks performance benefits from a global private network that eliminates the need for backhauling and mitigates single choke points. The built-in optimization and resiliency combined with guaranteed stable core latency ensures lightning-fast connectivity and zero congestion between sites, data centers, and application instances. Below are real performance improvements experienced by a site exchanging CAD data between San Francisco and Bangkok.

R in Aryaka Stands for Reliable

Bottom Line

The time to contemplate a network upgrade is long overdue. It is no longer the time to claim ignorance but to act. In the quest to stay competitive, a fully managed Cloud-First WAN may be the most important investment you make.

In all honesty, what are your fall-back options?

  • Keep putting up with the sublime performance of the public internet.
  • Deploy separate optimization boxes.
  • Use alternate workarounds such as asking your peers to deliver the files and messages on your behalf.
  • Use shadow IT by trying to access mission-critical data with third-party apps such as Dropbox or other unauthorized applications.

Are you really willing to go through that hassle?

Find out how we helped Nvidia to accelerate their application performance by almost 80%.

You can also learn more about how we successfully helped enterprises achieve stupendous application performance in regions such as China.

Want a more personalized experience? Reach out to us for a free demo.

The post Blame Your Network for Poor SolidWorks Performance appeared first on Aryaka.

]]>
https://www.aryaka.com/blog/blame-your-network-for-poor-solidworks-performance/feed/ 0
SAP and Connectivity: Is Poor Plumbing Killing Your Experience? https://www.aryaka.com/blog/sap-connectivity/ https://www.aryaka.com/blog/sap-connectivity/#respond Thu, 24 Jun 2021 15:42:09 +0000 https://www.aryaka.com/?p=32528 Did you know that 98% of the 100 most valued brands are SAP customers? Or that SAP systems touch almost 77% of the world’s transaction revenue? Serving well over four and a half million active customers across 180 countries, there is barely any vertical that the SAP Suite doesn’t tap into. Manufacturing, logistics, finance, sales, […]

The post SAP and Connectivity: Is Poor Plumbing Killing Your Experience? appeared first on Aryaka.

]]>
SAP and Connectivity

Did you know that 98% of the 100 most valued brands are SAP customers? Or that SAP systems touch almost 77% of the world’s transaction revenue?

With well over four and a half million active customers across 180 countries, there is barely any vertical that the SAP Suite doesn't tap into. Manufacturing, logistics, finance, sales, supply chain to IoT, cloud, and everything else in between — you name it.

To fend off the competitors squeezing into the expanding crevices of its market, and with rapid tech advancement as the backdrop for the foreseeable future, SAP banks on innovation to stay ahead of the curve.

No. I am not making a generic statement. SAP invested more than €4.2 billion in research and development in 2019. You believe me now?

SAP investment in R&D

Say Hello to SAP S4 HANA

For readers who are not aware, HANA stands for High-Performance Analytical Appliance. Allow me to translate.

The SAP HANA platform paves the way for a new class of real-time analytics and applications in addition to existing SAP tools. It leverages an in-memory database, which is orders of magnitude faster than any traditional database running on spinning media and is designed to instantly analyze vast clusters of data as it is created, negating the need for complex data management layers and storage.

Its USP? It ingests a mammoth amount of data from numerous endpoints, including but not limited to:

  • UX/UI data coming from the websites
  • Data from the mobile workforce
  • The IoT devices and the machine learning units
  • The NetWeaver stack that constantly talks back and forth to the HANA database.

Along with multiple other non-traditional data sources, SAP HANA lets users access a vast volume of structured and unstructured data instantly, with near-zero latency, allowing them to query data on demand, in an instant, as and when needed.

SAP HANA NetWeaver

Poor Plumbing Killing the Experience?

Do you see that water tank sitting on top of your house? Ideally, it holds enough water to comfortably take care of your routine chores for a few days. How good is it though if your plumbing system is not up to the mark? All that water sitting in the tank is no good if it doesn’t flow into the kitchen tap. The case with SAP and network connectivity is no different.

Your SAP Database is the water tank, and the network connectivity is your plumbing system.

SAP connectivity compared with a plumbing system

Traditional networks did a fine job back when connectivity requirements were straightforward, when workplaces had well-defined perimeters and applications such as SAP stayed put in private data centers.

But then multiple trends caught up, and users started spreading. Not to forget, the COVID catastrophe only fueled the situation. User reliance on data also quadrupled within the span of a few years.

Consider IoT tech, for instance. Not only does it generate a massive amount of data, but that data also needs to be shared with numerous applications.

The same goes for SAP. Users access their SAP applications from pretty much everywhere. Therefore, even SAP HANA needs a robust network to facilitate data replication across numerous sites, especially remote ones. There is also the security side of it, but we will save that story for another day.

What Is It Going to Be?

By 2023, almost 80% of SAP users will have fully or partially moved to the cloud. If you're reading this blog, chances are high that your organization is already contemplating it (if it hasn't switched already). So, how do you plan on fixing that plumbing system?

The Public Internet?

Why SAP is unreliable on public internet

Apart from the fact that your business traffic will be competing for bandwidth with videos of cats and dogs, there are multiple other reasons why it’s not a great idea.

The Internet is a breeding ground for latency, packet loss, and jitter. In plain English, it fails to keep up with the huge file transfers and the large numbers of data packets sent by SAP web applications. As inconsistent latency kicks in, it disrupts throughput, even over smaller distances, due to network congestion and peering policies.

The result? Dropped data, slower transmissions, connection time-outs, and mediocre SAP web application performance.

MPLS?

SAP with MPLS

Not only does the rigid nature of MPLS go against the founding ideology behind SAP HANA, which is to decentralize the SAP presence and make it available anywhere and everywhere; it is also hard for a tool that banks heavily on the to-and-fro movement of data between different operational units and remote workers to work with the hub-and-spoke architecture of MPLS.

This architecture inadvertently overwhelms the network with data backhauling, causing the traffic to "trombone" along an inefficient route that increases the distance between users and their applications. This is in addition to MPLS's flexibility and scalability limitations.

Think Cloud-First

Connecting your branch offices to SAP HEC does not have to be difficult. What if there was an easy way to connect directly to and between all your SAP instances, without MPLS, complicated appliances, or the need for peering?

Just like HANA, Aryaka, the Cloud-First WAN, was built from the ground up on cloud-first principles. It lets users connect to their SAP HEC instances in 30 milliseconds or less, securely, from anywhere in the world.

Simplified SAP HEC connectivity

Do you want to know more about how we do what we do? Read our solution brief on SAP HANA to learn about the intelligence that Aryaka binds to your SAP applications, especially S4 HANA.

You can also learn how we helped a US-based specialty chemical manufacturing enterprise with 5,000+ employees deal with the abysmal performance of their SAP Suite.

Want an in-depth perspective of the SAP landscape? Check out our webinar on optimizing SAP HANA Performance with Aryaka, The Cloud-First WAN.

Need to move faster? Request a free demo.

The post SAP and Connectivity: Is Poor Plumbing Killing Your Experience? appeared first on Aryaka.

]]>
https://www.aryaka.com/blog/sap-connectivity/feed/ 0
A Path Forward for CIOs: Gartner on Architecting Internet Performance and Aryaka’s Cloud-First WAN as an Optimal Solution https://www.aryaka.com/blog/path-forward-for-cios-architecting-internet-performance/ https://www.aryaka.com/blog/path-forward-for-cios-architecting-internet-performance/#respond Tue, 19 Jan 2021 13:47:43 +0000 https://www.aryaka.com/?p=29305 Recently, Gartner published a foundational document on optimizing internet performance (How to Architect Your Network to Optimize Internet Performance and Reliability, Published 29 December 2020 – ID G00731192).  Many of you may have access to this.  Why I say foundational is that it ties together many of the themes that are top-of-mind for CIOs and […]

The post A Path Forward for CIOs: Gartner on Architecting Internet Performance and Aryaka’s Cloud-First WAN as an Optimal Solution appeared first on Aryaka.

]]>
cios architecting internet performance

Recently, Gartner published a foundational document on optimizing internet performance (How to Architect Your Network to Optimize Internet Performance and Reliability, Published 29 December 2020 – ID G00731192).  Many of you may have access to this.  Why I say foundational is that it ties together many of the themes that are top-of-mind for CIOs and network planners, and that at Aryaka we totally embrace.

One of the trends in WAN evolution is the ability to leverage a hybrid environment, combining multiple technologies. In our case, Aryaka customers leverage our private core – more on that in a bit – as well as MPLS and broadband internet aka DIA. It is this latter option where enterprises sometimes run into problems, not fully understanding application performance implications of a non-SLA driven link, globally or even regionally. Gartner makes a bold statement: “Using the internet for network connectivity can lower cost and improve application performance by reducing latency, despite its lack of predictability and centralized support.” So how does this work given lack of SLAs and where enterprises “become responsible for assuring reliability and performance”? The document poses the question: “How can I use the internet to carry my application traffic, while ensuring consistent performance, visibility, and reliability?” There are multiple parts to the answer, including what actions can be taken across the first, middle, and last-mile.

Looking first at the middle-mile, in order to ensure end-to-end application performance, there must first be performance guarantees across this segment.  The document states: “Vendors such as Anapaya, Cato Networks, Tata Communications and Teridion offer an enhanced internet service based on an OTT internet overlay. Vendors such as Apcela, Aryaka and Mode (now part of VMware) base their deployment on a private middle mile.”   There is a critical difference in approaches here, since an OTT internet overlay suffers the issues identified above – lack of predictability and centralized support.  The Aryaka private core, leveraging dedicated resources, suffers none of these limitations.  You have an issue, you call Aryaka support as part of a fully-managed service.  As Gartner states, the ‘internet’ doesn’t have a support line!  For completeness, Aryaka does offer DIA-only connectivity for customers between sites, both globally and regionally, but this too is offered as part of a managed and fully supported service.

The first-mile, cloud connectivity, is where DIA-only options also present problems.  A basic internet service won’t include managed multi-cloud access, so enterprises must provision connectivity to any and all IaaS/PaaS/SaaS providers they leverage, a path that adds additional complexity and cost.  To effectively manage this and the various cloud services consumed, they must be domain experts for every platform and application.  Sure, they can leverage one of the cloud interconnection providers, but this is an additional piece of a complex puzzle they must manage and budget.

For the last-mile, enterprises have deep experience with ISP management, sometimes in conjunction with an aggregator. Remember that the SD-WAN model in general calls for multiple access technologies including business internet, ‘residential’ internet, MPLS, and cellular. Each of these options comes with advantages and disadvantages, and a multi-national, dealing with different ISPs, may not have the in-house expertise to manage this complexity. And, they may not have access to the optimization technologies that will deliver the required performance and resiliency expected. The solution here, as Gartner points out, is to “prefer offerings that are bundled with an SD-WAN solution, such as Teridion, Cisco-Meraki or the all-inclusive Aryaka.” Given that both Teridion and Cisco-Meraki are OTT, Aryaka is the only provider to offer SLAs across both the last and middle-mile.

Last but not least, visibility is identified as key to success.   Remember, you can’t manage what you can’t measure.  A DIY internet deployment, or even one front-ended by a telco or MSP, still requires end-to-end visibility in order to ensure reliability and performance.  It goes without saying that a fully managed service, offering a single point of visibility and control for all WAN connectivity options, delivers a competitive advantage and in fact mitigates some of the potential cost and management pitfalls identified.

To summarize, the Aryaka architecture provides a path to ensuring internet success:

  • Our middle-mile offers performance guarantees and a single route to support
  • Our first-mile optimizes multi-cloud connectivity, removing complexity and minimizing cost
  • Our last-mile ensures edge performance and resiliency, also hiding complexity from IT
  • Our end-to-end visibility capabilities tie this all together, permitting IT to monitor and verify end-to-end performance SLAs

By following the suggestions above, further detailed in the actual report, enterprises can ensure that their hybrid WAN deployments that include the internet will deliver on productivity, flexibility, and TCO expectations.

The post A Path Forward for CIOs: Gartner on Architecting Internet Performance and Aryaka’s Cloud-First WAN as an Optimal Solution appeared first on Aryaka.

]]>
https://www.aryaka.com/blog/path-forward-for-cios-architecting-internet-performance/feed/ 0
Why MPLS and Cloud Applications Don’t Mix https://www.aryaka.com/blog/mpls-cloud-applications-dont-mix/ https://www.aryaka.com/blog/mpls-cloud-applications-dont-mix/#respond Tue, 03 Nov 2020 20:04:05 +0000 https://www.aryaka.com/?p=17299 If you’re in charge of delivering and maintaining a global WAN, then you know the headache that this chart can induce. Why? Because legacy WAN connectivity approaches like MPLS do not address performance challenges for cloud and SaaS applications. This was not the case 20 years ago. In the past, these types of applications lived […]

The post Why MPLS and Cloud Applications Don’t Mix appeared first on Aryaka.

]]>
If you’re in charge of delivering and maintaining a global WAN, then you know the headache that this chart can induce.

Cloud native landscape v0.9.3

Why? Because legacy WAN connectivity approaches like MPLS do not address performance challenges for cloud and SaaS applications.

This was not the case 20 years ago.
In the past, these types of applications lived in the corporate data center. All you had to do was deploy MPLS, add WAN Optimization, and you’d be set.

Why doesn’t this model work anymore?
Legacy application delivery model

The major limitation of MPLS is that it requires a termination point for access, and you need a WAN Optimization appliance on each end in order for it to provide real application performance improvements. Deploying a device in your own corporate data center is one thing; however, when you're dealing with cloud and SaaS applications, you cannot control those locations since those environments are hosted by other companies.

Businesses relying on mission-critical cloud/SaaS applications are completely at the mercy of local Internet providers and the congested conditions of the public Internet infrastructure.

So, is there a solution besides hoping that the public Internet will be able to support the performance requirements of cloud-based and SaaS applications? In local or regional environments this may not be an issue, but it certainly becomes one once you traverse oceans and continents.

Latency and packet loss: Public Enemy #1 for applications
The public Internet is prone to high latency and packet loss, which results in poor application performance. Over long distances, high latency can result in employees having to wait several minutes to refresh their screens for business-critical and time-sensitive cloud and SaaS applications.

Latency fluctuations

Latency Fluctuations on Internet from Boston to Shanghai

In addition, packet loss ranging from 10-15% over the Internet is not abnormal between branch offices located in San Jose and China. This results in data having to be sent through the network over and over again. When you add the fact that your data and applications must also traverse a large distance in scenarios like this (latency), employees may have to wait several minutes to refresh their screens.

Packet Loss from AWS Beijing to AWS Virginia

15-60% Packet Loss on Internet from AWS Beijing to AWS Virginia

For anyone attempting to access mission-critical, time-sensitive applications like Salesforce or SAP Business By Design, this lag in wait time makes the application virtually unusable.
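To see why loss rates in the 10-15% range are so punishing, consider a simple model in which every packet is lost independently with probability p: each packet then needs on average 1/(1 - p) transmission attempts, and every retry costs at least one more round trip. The sketch below is only a rough approximation of real TCP behavior, but the direction is right.

```python
def expected_sends(loss: float) -> float:
    """Average number of transmission attempts per packet when each attempt
    is lost independently with probability `loss` (a simplifying assumption)."""
    return 1.0 / (1.0 - loss)

for loss in (0.01, 0.10, 0.15):
    extra = (expected_sends(loss) - 1) * 100
    print(f"{loss:.0%} loss -> {expected_sends(loss):.2f} sends per packet "
          f"(~{extra:.0f}% extra traffic, each retry adding at least one RTT)")
```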

A recent interview we had with Forrester Principal Analyst, Andre Kindness, confirmed these dilemmas, especially when it comes to applications like voice and video conferencing:

“A lot of business professionals are doing voice calls over the internet,” he explained, “But in the world of business where you know packets drop and latency can be an issue, when you are having conversations with people in different parts of the globe, trying to understand those customers, partners, or peers with calls dropping packets and having long delays will make a huge dramatic difference on the business and make it very difficult to build relationships.”

Is SD-WAN an answer?
Edge-based SD-WAN may be an answer for local deployments, but it is definitely not the answer for global ones. SD-WAN is not a new connectivity option; it merely leverages existing connections, such as the public Internet or a hybrid scenario that combines the public Internet with MPLS links for specific applications.

A new class of connectivity is required.
To solve global application delivery issues you need a global private network that provides the flexibility of the public Internet and the reliability of MPLS. To that end, we designed Aryaka's network to be the only global SD-WAN built on its own global private network, so you can successfully deliver any application, anywhere in the world. We provide secure access to data and applications not only in the corporate data center, but also in any cloud and SaaS environment.

Global enterprises access Aryaka’s Managed SD-WAN through their local Internet and connect to one of our points of presence (PoPs) around the world. This enables them to be up and running on Aryaka’s private WAN within hours or days, compared to the months it takes for an MPLS deployment.


The network is also layered with WAN Optimization, which increases throughput and accelerates applications no matter where they reside or where their end users access them from.

How fast can applications be deployed and accelerated on Aryaka?
One of the major benefits of using Aryaka's global SD-WAN is that both deployment and application performance are dramatically accelerated. We can look to one of our customers for an example:

Recently, JAS Forwarding Worldwide, one of the global leaders in freight forwarding and logistics, started using Aryaka to speed up performance and improve the quality of their Zoom video conferencing service for executives and employees around the world.

Using Zoom over the Internet posed challenges for JAS. Their legacy Internet-based network failed to meet quality expectations and caused frequent disconnects during video conferencing calls.

Once JAS deployed Aryaka for Zoom video conferencing, they saw an almost instantaneous improvement in audio and video quality and delivery across the platform globally.

“All it took was a phone call to Aryaka, and within just minutes, the network was up and ready for all traffic from Zoom. We experienced better-than-MPLS video conferencing quality with Aryaka at a fraction of the cost,” said Mark Baker, CIO of JAS. “Not only did end users stop complaining about voice and video quality issues, the usage of the Zoom platform within JAS started to rise.”

The solution for complete application delivery
IT staff can probably relate to the sheer number of applications shown above and the difficulty of delivering them to end users worldwide with optimal performance. We sympathize with the task at hand, which is why we designed our global SD-WAN solution to resolve these issues in a matter of hours, with a simplicity never before seen on any network.

We invite you to speak with us or our customers about how Aryaka’s global SD-WAN can help you address global connectivity needs. You can also get started with a proof of concept today.

The post Why MPLS and Cloud Applications Don’t Mix appeared first on Aryaka.

]]>
https://www.aryaka.com/blog/mpls-cloud-applications-dont-mix/feed/ 0
How Latency, Packet Loss, and Distance Kill Application Performance https://www.aryaka.com/blog/latency-packet-loss-distance-kill-application-performance/ https://www.aryaka.com/blog/latency-packet-loss-distance-kill-application-performance/#respond Sat, 31 Oct 2020 15:48:22 +0000 https://www.aryaka.com/?p=18993 Latency, packet loss, distance, and application performance. What do all these terms have to do with each other? If you manage IT networks for a global enterprise, it’s important to step back and look at big picture, so you can more clearly see how they all impact one another. This may sound like “Networking 101” […]

The post How Latency, Packet Loss, and Distance Kill Application Performance appeared first on Aryaka.

]]>
Latency, packet loss, distance, and application performance. What do all these terms have to do with each other?

If you manage IT networks for a global enterprise, it's important to step back and look at the big picture, so you can see more clearly how they all impact one another.

This may sound like “Networking 101” to some of you, but it’s critical to understand the relationships between these terms and their combined impact on application performance.

Definitions:

  • (Network) Latency is an expression of how much time it takes for a packet of data to get from one designated point to another.
  • Packet loss is the failure of one or more transmitted packets (could be data, voice or video) to arrive at their destination.
  • Distance is the intervening space between two points or, for the sake of enterprise networks, two offices.
  • TCP (Transmission Control Protocol) is a standard that defines how to establish and maintain a network conversation via which application programs can exchange data.

The Big picture:

When there is distance between the origin server and the user accessing it, the user needs a reliable network to complete a task. That network may be private, like a point-to-point link or MPLS, or it may be public, typically the Internet. If the network suffers packet loss, the overall throughput between server and user drops significantly as distance increases. This means that the further the user is from the origin server, the more unusable the network becomes.

Why is that?

The main culprit is TCP (Transmission Control Protocol), the standard that defines how to establish and maintain a network conversation via which application programs exchange data.

TCP is the protocol that provides reliable, ordered, and error-checked delivery of data between servers and users across a network. TCP is a good guy and helps with data quality. It's also a connection-oriented protocol, which means you must first establish a connection with a remote host or server before any data can be sent.

Once a TCP connection is established, the next step is flow control: determining how fast the sender can send data and how reliably the receiver can receive it. Depending on the quality of the network, the flow is governed by window sizes negotiated from both ends, and the two ends may disagree if the client and the server see the network's characteristics differently.

[Figure: Flow control for a TCP connection]
This has a major impact on application performance!
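To see how much, here is a minimal sketch of the window-limited throughput bound, assuming a simple model in which the sender can keep at most one receive window of unacknowledged data in flight per round trip. The 64 KB window and the RTT values below are illustrative assumptions:

```python
# Window-limited throughput model: at most one receive window of data can be
# in flight per round trip, so throughput <= window_size / RTT.

def window_limited_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on throughput imposed by the receive window and the RTT."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1_000_000

WINDOW_BYTES = 64 * 1024  # assumed 64 KB receive window with no window scaling

for label, rtt_ms in [("same metro", 2),
                      ("US coast to coast", 70),
                      ("Boston to Shanghai", 250)]:
    mbps = window_limited_throughput_mbps(WINDOW_BYTES, rtt_ms)
    print(f"{label:>20} ({rtt_ms:>3} ms RTT): {mbps:6.1f} Mbps max")
```

A 250 ms round trip caps a 64 KB window at roughly 2 Mbps, no matter how much bandwidth is provisioned underneath.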

Certain applications, like FTP, use a single flow and scale to the maximum available window size to complete the operation. Windows-based applications, however, tend to be more 'chatty' and need multiple round trips to get an operation completed.
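The cost of that chattiness is easy to approximate. The sketch below uses illustrative, assumed round-trip counts (they are not measurements of any specific application) to show how per-operation round trips multiply directly with RTT:

```python
# Rough model of "chatty" application behaviour: once links get long, total
# wall-clock time is dominated by (round trips per operation) x RTT.

def operation_time_seconds(round_trips: int, rtt_ms: float) -> float:
    """Minimum time for an operation needing that many request/response exchanges."""
    return round_trips * rtt_ms / 1000.0

RTT_MS = {"LAN": 1, "regional WAN": 40, "intercontinental": 250}

for app, trips in [("bulk transfer, single flow (FTP-like)", 5),
                   ("chatty client/server application", 200)]:
    times = ", ".join(f"{where}: {operation_time_seconds(trips, rtt):.1f} s"
                      for where, rtt in RTT_MS.items())
    print(f"{app} -> {times}")
```

Two hundred round trips are barely noticeable on a LAN but add up to almost a minute of waiting on an intercontinental path.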

The simple model to consider:

Long Distance + Packet Loss + High Latency = Poor Application Performance for TCP Applications.

In fact, looking at the chart below of the maximum throughput one can achieve, you wonder how organizations manage any collaboration across long distances at all.

[Figure: Maximum TCP throughput with increasing network distance]
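The curve behind a chart like this can be approximated with the widely used Mathis model, which bounds the throughput of a standard TCP flow at roughly MSS / (RTT × √loss). The sketch below plugs in illustrative RTT and loss values; it approximates the trend rather than reproducing the exact figures above:

```python
# Mathis et al. approximation for loss-limited TCP throughput:
#   throughput <= MSS / (RTT * sqrt(loss_rate))
# Only meaningful for loss_rate > 0; it ignores window limits.
import math

MSS_BYTES = 1460  # typical TCP segment size on Ethernet

def mathis_throughput_mbps(rtt_ms: float, loss_rate: float) -> float:
    """Approximate upper bound on a single TCP flow's throughput."""
    rtt_s = rtt_ms / 1000.0
    return (MSS_BYTES * 8) / (rtt_s * math.sqrt(loss_rate)) / 1_000_000

for rtt_ms in (10, 50, 150, 250):         # increasing network distance
    for loss in (0.0001, 0.01, 0.10):     # 0.01%, 1%, 10% packet loss
        print(f"RTT {rtt_ms:>3} ms, loss {loss:>6.2%}: "
              f"{mathis_throughput_mbps(rtt_ms, loss):8.2f} Mbps")
```

At 250 ms of RTT and 10% loss, a single TCP flow is limited to a small fraction of a megabit per second, which matches the experience of screens taking minutes to refresh.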

Voice and video perform poorly when there is packet loss, especially over long-distance Internet links. In fact, even minimal packet loss combined with latency and jitter will make a network unusable for real-time traffic. Why? Because these applications run over UDP (User Datagram Protocol).

Unlike TCP, the good guy who polices every interaction, UDP couldn't care less. UDP is connectionless, with no handshaking prior to an operation, and it exposes any unreliability of the underlying network to the user. There is no guarantee of delivery.
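A tiny simulation makes the point for real-time traffic. The loss rate, delay, jitter, and playout deadline below are illustrative assumptions; the mechanism they illustrate is that a datagram which is dropped, or which arrives after its playout deadline, is simply gone, because UDP will never retransmit it:

```python
# Toy simulation of a one-way UDP voice stream: packets are either dropped
# outright or delayed by random jitter; anything missing its playout deadline
# is as good as lost, because UDP will never retransmit it.
import random

random.seed(42)  # deterministic toy run

def usable_packet_ratio(n_packets: int, loss_rate: float, base_delay_ms: float,
                        jitter_ms: float, playout_deadline_ms: float) -> float:
    """Fraction of voice packets that both arrive and arrive in time to be played."""
    usable = 0
    for _ in range(n_packets):
        if random.random() < loss_rate:
            continue                              # dropped in transit; gone for good
        delay = base_delay_ms + random.uniform(0, jitter_ms)
        if delay <= playout_deadline_ms:
            usable += 1                           # arrived before its playout deadline
    return usable / n_packets

ratio = usable_packet_ratio(n_packets=10_000, loss_rate=0.05,
                            base_delay_ms=180, jitter_ms=80,
                            playout_deadline_ms=220)
print(f"Packets usable for playback: {ratio:.1%}")
```

With just 5% loss and 80 ms of jitter on a long path, roughly half the stream misses its playout window, which is exactly the "calls dropping packets and having long delays" experience described earlier.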

Here is the path most organizations with a global user base and growing application performance issues tend to take.

  1. Focus on Internet links. Buy more bandwidth. Throughput typically increases somewhat but not enough to fix the issue.
  2. Upgrade to MPLS links. Wait for 6-9 months for deployment. Realize that the problem has not been solved for long-distance connections.
  3. Consume more and more and more bandwidth. Deploy QoS to deal with congestion and its impact on real-time traffic. Voice and Video do okay, assuming enough bandwidth is configured.
  4. Realize that you can’t afford to keep buying more bandwidth at this alarming rate.
  5. Add WAN Optimization appliances. With TCP optimization, data compression, and application proxies, these do address the throughput issues.
  6. Watch the cost of managing and maintaining WAN Optimization hardware skyrocket, then experience sticker shock when it's time to refresh those appliances.
  7. Consider your options. Cloud Services? Mobility?
  8. Revisit your entire enterprise network design. Vow to transform that network. Plan for the Cloud and for Mobility. Account for Big Data and your growing needs. Accommodate acquisitions and business changes.

And how would you do that? If you know that the status quo is broken, you also know that the traditional hardware vendors are trying to squeeze every last red cent out of those boxes before their business model becomes completely outdated.

Aryaka is the world's first and only global, private, optimized, secure, and managed SD-WAN as a service, delivering the simplicity and agility to address all enterprise connectivity and application performance needs. Aryaka eliminates the need for WAN Optimization appliances, MPLS, and CDNs, delivering optimized connectivity and application acceleration as a fully managed service with lower TCO and a quick deployment model.

We invite you to learn more by contacting us today, or download our latest data sheet on our core solution for global enterprises.

The post How Latency, Packet Loss, and Distance Kill Application Performance appeared first on Aryaka.

]]>
https://www.aryaka.com/blog/latency-packet-loss-distance-kill-application-performance/feed/ 0