Introduction
APIs have become pivotal in the information economy, enabling millions of applications to talk to each other across the Internet and other networks. This in turn has created the need for the API Gateway, a management tool that aggregates APIs and mediates requests between clients and back-end services. API Gateways consolidate inbound API request traffic, handle load balancing across server resources, and route requests between services and servers with minimal latency.
So APIs and API gateways are indispensable. But because they exist precisely to accept incoming resource requests, they are also natural points of vulnerability. What better way to gain entry to a premises than through an open door, right? Accordingly, companies need to develop an API security architecture sophisticated enough to dynamically differentiate between legitimate information requests and genuine security threats and incursions.
This article will provide insights into the fundamental concepts and technologies that make such a secure API Gateway architecture possible. Finally, it will introduce the Zero Trust Model, through which a secured API Gateway can provide the balance of openness and security that the emerging information economy demands.
Why API Security?
Today’s application environment typically hosts interactions between thousands of distributed applications and hybrid services, straddling cloud services, servers, mobile devices, and line-of-business applications, as well as the ever-growing presence of remote workers. For every request for information, the question must be posed: is this request from a legitimate employee, a casual customer, a contractor, or a hacker? Answering it correctly is where authentication, authorization, and threat detection are crucial.
Add to the mix the prevalence of SaaS, PaaS, and IaaS (Software, Platform and Infrastructure as a Service, respectively), which are now commonplace — think Google Apps, SalesForce, and Amazon Web Services. For many organizations they’re critical resources, but they rely on, and generate, large volumes of API requests, thereby providing another attack vector for the exploitation of APIs.
In an earlier Internet age, security was mainly handled through static access control, which corralled the important resources behind a firewall to keep the bad guys out, with only privileged visitors given access either on-premises or through VPN. But this approach is too simplistic for today’s distributed, global software ecosystems.
Meanwhile the sophistication of threat actors has, regrettably, evolved to exploit these trends in software development. Tactics such as DDoS, SQL injection, and the use of surveillance and hacking tools, whether in the hands of ‘script kiddies’, state-sponsored hackers, or simple opportunists, all pose threats that must be identified and neutralized. But while APIs are now recognized as a major attack vector for malicious activity, the older-style ACL (Access Control List) approach to defense is insufficient, because such single-point controls allow intruders to move between back-end systems once they’re through the gateway.
This is not just theory; there have been numerous large-scale hacks caused by insecure APIs, including:
- Personal data of 2 million customers exposed by T-Mobile in 2018
- Exposure of data belonging to 50 million users by Facebook in 2018
- The Microsoft Exchange Server hacks, disclosed in early 2021
…to mention only a few. This is why an API-centric approach to security is obligatory, as we will explore further below.
Best Practices for Deploying a Secure API Gateway
Several steps can be taken to enhance the security of an existing API Gateway:
Use HTTPS for all communications, even where authentication is not required. This might entail using separate SSL/TLS certificates for different services within an application suite.
Privileged content should always be protected by secure API authentication methods, for example through authentication plug-ins (more on this below).
Use regex (regular expression) checks to validate input and sniff out dubious or suspicious entries or patterns of data in API requests.
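As a minimal sketch of this idea in Python (the field names and patterns here are hypothetical, and real validation rules would be tailored to each API), an incoming request body might be checked against an allow-list of regexes before it reaches any upstream service:

```python
import re

# Hypothetical allow-list patterns for fields we expect in a request body.
VALIDATORS = {
    "username": re.compile(r"^[A-Za-z0-9_.-]{3,32}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$"),
    "order_id": re.compile(r"^[0-9]{1,12}$"),
}

def validate_request(body: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the input looks clean."""
    errors = []
    for field, pattern in VALIDATORS.items():
        value = body.get(field, "")
        if not isinstance(value, str) or not pattern.fullmatch(value):
            errors.append(f"invalid or missing field: {field}")
    return errors

if __name__ == "__main__":
    suspicious = {"username": "alice; DROP TABLE users;--", "email": "a@b.co", "order_id": "42"}
    print(validate_request(suspicious))  # ['invalid or missing field: username']
```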
Apply rate limiting to prevent excessive API requests from overwhelming back-end systems; it also defends against the repetitive request patterns typical of DoS attacks. Throttling is a form of rate limiting implemented by reducing bandwidth or terminating a session in the event of overload, and size limiting (capping the size of request payloads) presents another option for managing excessive API requests.
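One common way to picture rate limiting is a per-client token bucket; the sketch below is purely illustrative, and the capacity and refill figures are placeholders rather than recommendations:

```python
import time

class TokenBucket:
    """Illustrative per-client token bucket: each request spends a token, and tokens refill over time."""

    def __init__(self, capacity: int = 10, refill_per_second: float = 5.0):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Top the bucket up according to how much time has passed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.refill_per_second)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # Caller would respond with HTTP 429 Too Many Requests.

buckets: dict[str, TokenBucket] = {}

def is_allowed(client_ip: str) -> bool:
    bucket = buckets.setdefault(client_ip, TokenBucket())
    return bucket.allow()
```

In a real deployment this state would usually live in a shared store such as Redis, so that every gateway node enforces the same limits.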
Minimize object information disclosure by ensuring that only the requested information is returned in response to API requests. Consider splitting API gateways by function, e.g. for mobile devices, web browser requests, and IoT requests.
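A hedged illustration of limiting disclosure at the gateway is to allow-list the fields each endpoint may return; the endpoints and field names below are invented for the example:

```python
# Hypothetical allow-lists of fields each public endpoint may return.
RESPONSE_FIELDS = {
    "/users/profile": {"id", "display_name", "avatar_url"},
    "/orders/status": {"order_id", "status", "updated_at"},
}

def filter_response(path: str, record: dict) -> dict:
    """Strip everything not explicitly allowed for this endpoint before it leaves the gateway."""
    allowed = RESPONSE_FIELDS.get(path, set())
    return {key: value for key, value in record.items() if key in allowed}

full_record = {
    "id": 7, "display_name": "Alice", "avatar_url": "https://example.com/a.png",
    "email": "alice@example.com", "password_hash": "redacted", "internal_notes": "VIP",
}
print(filter_response("/users/profile", full_record))
# {'id': 7, 'display_name': 'Alice', 'avatar_url': 'https://example.com/a.png'}
```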
Maintain service and resource isolation, such that access to one resource doesn’t automatically provide access to a linked service. And links between services and resources should ideally always be secured by SSL/HTTPS.
Finally, in terms of general best practice, regularly audit and monitor APIs to ensure they’re up to date and meet security requirements. (This ought to be a KPI!)
Components of an API Security Model
At a high level, API security can be described in terms of three components:
- Authentication — is the requestor/user who they say they are?
- Authorization — are they permitted to access the requested resource?
- Threat Prevention — if the request fails these checks, is it properly terminated and logged for analysis?
These security requirements are handled by the API Gateway architecture through a number of interlocking technologies. Robust authentication and authorization are implemented via a choice of standards and providers, including JSON Web Tokens, LDAP authentication, OAuth 2.0, API Key Authentication, OpenID Connect, and Okta, the specific choice being determined by the integration requirements of a given landscape. Authentication plugins are applied to service entities and are mapped against the upstream services they represent, meaning that authentication is validated only for those upstream services directly. This enables very fine-grained permission controls on an object or service basis.
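To make the token-based option concrete, here is a minimal sketch of validating a JSON Web Token using the PyJWT library; the signing secret, issuer, and required claims are placeholders, and a production plugin would more likely verify asymmetric signatures against keys published by the identity provider:

```python
import jwt  # PyJWT: pip install PyJWT

SECRET = "replace-with-a-real-signing-key"  # Placeholder; real setups typically use RS256 public keys.

def authenticate(token: str) -> dict | None:
    """Return the verified claims if the token is valid, otherwise None."""
    try:
        claims = jwt.decode(
            token,
            SECRET,
            algorithms=["HS256"],               # Pin the algorithm; never accept "none".
            issuer="https://auth.example.com",  # Hypothetical issuer for illustration.
            options={"require": ["exp", "iss", "sub"]},
        )
        return claims
    except jwt.InvalidTokenError:
        # Expired token, bad signature, wrong issuer, missing claims, and so on.
        return None
```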
Web Application Firewalls provide another level of security. WAFs use rules to filter and monitor HTTP traffic, and are commonly paired with port monitoring practices to guard against attacks launched through port scanning techniques.
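In spirit, a WAF rule is a pattern with an action attached. The toy rules below are illustrative only and far cruder than real rule sets such as the OWASP Core Rule Set:

```python
import re

# Toy WAF-style rules: a pattern to look for in the request, and the action to take.
RULES = [
    (re.compile(r"(?i)\bunion\b.+\bselect\b"), "block"),   # crude SQL injection signature
    (re.compile(r"(?i)<script\b"), "block"),               # crude XSS signature
    (re.compile(r"\.\./"), "block"),                       # path traversal attempt
]

def inspect(request_line: str, body: str) -> str:
    """Return 'block' if any rule matches the request, otherwise 'allow'."""
    payload = f"{request_line}\n{body}"
    for pattern, action in RULES:
        if pattern.search(payload):
            return action
    return "allow"

print(inspect("GET /search?q=1 UNION SELECT password FROM users", ""))  # block
```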
Now we come to a key concept in understanding API Gateway security, namely the relationship between the control plane and the data plane. This structure provides a ‘separation of powers’ analogous to the relationship between the executive and the judiciary. The control plane is the domain of policy configuration, providing the high-level rules that determine whether network requests should be accepted and, if so, how they are to be routed. Because the control plane handles the policy role, the data plane is free to concentrate exclusively on request processing. Put another way, the control plane sets the rules which the data plane then executes, providing both management oversight and processing efficiency.
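The division of labour can be sketched as a control plane that publishes declarative policy and a data plane that simply looks it up per request; this is a conceptual toy, not a depiction of how any particular gateway stores its configuration:

```python
# Control plane: where operators declare routing and security policy (illustrative structure only).
ROUTING_POLICY = {
    "/billing": {"upstream": "https://billing.internal:8443", "require_auth": True},
    "/catalog": {"upstream": "https://catalog.internal:8443", "require_auth": False},
}

# Data plane: applies the published policy to each request, nothing more.
def handle_request(path: str, authenticated: bool) -> str:
    route = ROUTING_POLICY.get(path)
    if route is None:
        return "404 no route configured"
    if route["require_auth"] and not authenticated:
        return "401 authentication required"
    return f"proxy to {route['upstream']}"

print(handle_request("/billing", authenticated=False))  # 401 authentication required
```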
Monitoring API Security Using SIEM
“SIEM” stands for security information and event management, and as such it provides a natural locus for the analysis of API-related traffic. Through API request logging and analysis, SIEM aggregates data from, for example, anti-virus software, WAFs, event logs, and other sources. This allows all the requisite data to be pooled in a single place and scanned for anomalies or threats through pattern analysis.
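As a rough illustration of the kind of pattern analysis a SIEM performs, the snippet below counts failed authentications per source address across pooled gateway logs and flags anything above an arbitrary threshold:

```python
from collections import Counter

# Pooled, simplified log records as a SIEM might receive them from the gateway and WAF.
events = [
    {"src_ip": "203.0.113.7", "status": 401},
    {"src_ip": "203.0.113.7", "status": 401},
    {"src_ip": "198.51.100.4", "status": 200},
    {"src_ip": "203.0.113.7", "status": 401},
]

FAILED_AUTH_THRESHOLD = 3  # Illustrative threshold, not a recommendation.

def flag_suspicious(log_events: list[dict]) -> list[str]:
    """Return source IPs whose failed-authentication count crosses the threshold."""
    failures = Counter(e["src_ip"] for e in log_events if e["status"] == 401)
    return [ip for ip, count in failures.items() if count >= FAILED_AUTH_THRESHOLD]

print(flag_suspicious(events))  # ['203.0.113.7']
```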
SOAR (security orchestration, automation and response) is an emerging security approach. SOAR makes extensive use of playbooks which orchestrate and automate security event detection and response. It builds on many of the principles of SIEM but provides more sophisticated tooling, including some AI-driven capabilities.
Both SIEM and SOAR approaches are going to be around for a long while to come, but require considerable sophistication in configuration and deployment, as well as continuous monitoring.
Understanding the Zero Trust Model for Secure API Gateways
In light of all the above, the question might come to mind: isn’t there an easier way to deal with all of this? It is in answer to this question that the Zero Trust Model has been devised.
The principle behind Zero Trust is simple: because trust is easy to exploit, it should never be assumed. Accordingly, the Zero Trust model operates at the level of the individual request, not at the level of personal ID. It assigns an identity to every service instance for each request. So rather than access being granted at a system or server level, it is negotiated one object or service instance at a time; services are closed by default and can only be accessed by presenting the appropriate credentials.
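A minimal sketch of such default-deny, per-request access decisions might look like the following; the service identities and policy entries are invented for illustration:

```python
# Hypothetical Zero Trust policy: which service identity may call which service, and how.
POLICY = {
    ("spiffe://example.org/checkout", "payments"): {"POST /charges"},
    ("spiffe://example.org/frontend", "catalog"): {"GET /products"},
}

def authorize(caller_identity: str, target_service: str, operation: str) -> bool:
    """Closed by default: allow only operations explicitly granted to this caller."""
    allowed = POLICY.get((caller_identity, target_service), set())
    return operation in allowed

print(authorize("spiffe://example.org/frontend", "payments", "POST /charges"))  # False (denied)
print(authorize("spiffe://example.org/checkout", "payments", "POST /charges"))  # True
```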
Zero Trust makes use of mTLS (Mutual Transport Layer Security). By requiring each party to a connection to prove possession of its own private key and present a valid certificate, mTLS dynamically validates the identity of the clients at each end of a connection. Additional verification is provided by the information contained in their separate TLS certificates.
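Using Python's standard library, a server that enforces mutual TLS can be sketched as follows; the certificate paths are placeholders, and in practice a service mesh (discussed next) would typically handle this on the application's behalf:

```python
import socket
import ssl

# Server side: present our certificate AND insist that the client presents one too.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # placeholder paths
context.load_verify_locations(cafile="internal-ca.pem")               # CA that signs client certificates
context.verify_mode = ssl.CERT_REQUIRED                               # this is what makes the TLS *mutual*

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()    # handshake fails unless the client proves its identity
        peer_cert = conn.getpeercert()      # verified details of the client's certificate
        print(addr, peer_cert.get("subject"))
        conn.close()
```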
Implementation of the Zero Trust model is where the concept of the service mesh comes into play. The service mesh is a dedicated infrastructure layer, managed from a central control plane, which handles communications between services or microservices via proxies. This greatly automates and simplifies the implementation of a Zero Trust architecture.
Conclusion
As we have seen, APIs and the API Gateway are set to remain a central feature of digital commerce now and for the foreseeable future. On the one hand, APIs are essential to the ability to transact through the Internet, but on the other, they provide obvious attack vectors for threat actors.
SIEM and SOAR suites provide solutions that can be tailored to API security, but the Zero Trust Model, when combined with a service mesh, offers an easier-to-implement solution that is specific to the security requirements of API Gateways.