20 May 2021 ~ 7 min read

Part I - Implementing an API Gateway using Envoy


We'll be using Envoy to implement an API gateway as a single entry point to the upstream services that sit behind it. Note that Envoy's default settings are tailored to the service mesh use case; for edge proxy best practices, see these docs.

So why bother with an API gateway? We're building a burger shop with a microservice architecture, and granular services come with some challenges. Let's look at a simple use case.

From a client-side perspective, the more granular an API is, the more work it can be for a client to interact with, e.g. multiple requests (round trips) to gather all the information needed for a page: user info, orders list, order details, basket, etc.

Gateways can help solve some of these problems when you have highly segregated APIs. What we need here is a consistent API to interact with, one that returns suitable responses. We want to authenticate and get a list of orders and the basket contents from a single HTTP request.

From a backend perspective, we want a way to offload cross-cutting concerns such as security, resiliency and authentication to the gateway instead of implementing them in each service. We also like the idea of decoupling the public-facing API endpoints from our internal services, so we can make changes to services without impacting the public API.

We'll go over some of the benefits of an API gateway and take a look at an implementation in which we will sit a gateway in front of some existing web API services and give a client (for example a web SPA) a single entry point to our services. In a later post, we will add more functionality to the gateway by using Envoy filters to handle some common concerns such as authentication and header enrichment.

Gateway

Benefits of an API Gateway

Services that sit behind the gateway are simpler, since the gateway now takes on concerns they would otherwise each have to handle. This means we can solve common concerns consistently and at a single entry point. Below are some of those concerns and the gateway features that address them.

Concern       | Feature
Security      | Single entry point which acts as a barrier to our API endpoints. Authentication (including token validation) and authorization.
Resiliency    | Retries configured consistently rather than in every service/technology/language.
Observability | Monitoring and metrics to help debug and scale. Logging.
Performance   | Response caching, reducing load on services.
Control       | Set up multiple gateways for different client needs (e.g. mobile, unauthenticated users with rate limits, different access levels and authentication flows). Internal services can use different protocols which might not be web friendly.
Composition   | Response transformation to give tailor-made responses. Backends for Frontends (BFFs).

Drawbacks

While there are many benefits it's always worth mentioning some drawbacks:

  • A single entry point means a single point of failure, and since everything goes through the gateway, it could also face degradation issues which could impact the reliability of your application. However, we can set up a cluster of API gateways with requests load balanced across them to be more resilient in a high availability environment.
  • It's another component to deploy, but separation of concerns often negates this drawback.

Envoy setup

Let's set up an Envoy instance in Docker and have our application communicate with it. This is a simple first step: we're directing the gateway to route requests to our services so we don't need to expose them directly. It's also the basis for future work, such as having the gateway handle authentication and response aggregation. There's a lot more to Envoy than we will show here, so check it out for yourself.

You can have a look at a full-length, hand-written envoy.yaml.
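As a sketch, that file can be run locally with the official Envoy image; the image tag and port mapping below are assumptions for illustration, while /etc/envoy/envoy.yaml is the default config path the official image reads:

```yaml
# docker-compose.yml (sketch): run Envoy with our hand-written config.
services:
  gateway:
    image: envoyproxy/envoy:v1.18-latest
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml:ro
    ports:
      - "10000:10000" # the listener port we configure below
```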

Now let's break down some key parts of the file here.

Listeners and Filters

The YAML below, in a nutshell, says we want our gateway to expose two routes (basket and orders), accessible via /api/basket and /api/orders respectively. For this to work, we also need to tell the gateway about our upstream services, so it knows how to map the two together. An upstream service might serve a specific route depending on how it was developed; it might use a different naming convention or contain version information, etc. So we need to match our public route to the route implemented by the service. In a later post we will add filters into the mix to show the power of a gateway.

But first some terminology, so we can understand the configuration.

Listener

This tells Envoy to bind to a port, in this case 10000.

Filters

A listener has filters. There are three types of filters: listener, network and HTTP, which operate on different network layers (e.g. L4 and L7). Filters run in order of their network layer, e.g. L4 network filters run before L7 HTTP filters. Filters perform some operation on the request and/or response. You can use Envoy's existing filters and also use the Lua scripting language to do some simple logic within a filter. In this example we use the envoy.filters.network.http_connection_manager filter. This filter is important because it lets us proxy HTTP requests (it is the network filter that hosts the L7 HTTP filters).
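As a quick taste of the Lua option, here is a hedged sketch of an inline Lua HTTP filter that would sit in the http_filters list before the router; the x-gateway-seen header name is made up for illustration:

```yaml
# Sketch: an inline Lua HTTP filter that stamps a header on every request.
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inline_code: |
      function envoy_on_request(request_handle)
        -- illustrative header name, not from the repo
        request_handle:headers():add("x-gateway-seen", "true")
      end
```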

Virtual host

Definition:

The top level element in the routing configuration is a virtual host. Each virtual host has a logical name as well as a set of domains that get routed to it based on the incoming request’s host header. This allows a single listener to service multiple top level domain path trees. Once a virtual host is selected based on the domain, the routes are processed in order to see which upstream cluster to route to or whether to perform a redirect.

Routes

A route configuration will trigger on match criteria and then route to the specified backend service e.g.

---
- name: basket
  match:
    prefix: "/api/basket"
  route:
    prefix_rewrite: "/api/basket"
    cluster: burgers.basket.api

A route block refers to a cluster, which we will define next.
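Note that prefix_rewrite earns its keep when the public path differs from the path the service actually implements; a hedged example (the internal /v1/orders path is an assumption for illustration):

```yaml
# Sketch: expose /api/orders publicly while the upstream serves /v1/orders.
- name: orders
  match:
    prefix: "/api/orders"
  route:
    prefix_rewrite: "/v1/orders"
    cluster: burgers.ordering.api
```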

Full listener example

static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 10000 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: index_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: burgers_backend
                      domains: ["*"]
                      routes:
                        - name: basket
                          match:
                            prefix: "/api/basket"
                          route:
                            prefix_rewrite: "/api/basket"
                            cluster: burgers.basket.api
                        - name: orders
                          match:
                            prefix: "/api/orders"
                          route:
                            prefix_rewrite: "/api/orders"
                            cluster: burgers.ordering.api
                upgrade_configs:
                  - upgrade_type: websocket
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router

Clusters

A cluster is a collection of endpoints (IP address and port) that serve as the backend for a service. Every service needs its own cluster. Here we have two clusters: one for the Orders API and another for the Basket API.

We're configuring Envoy to act as a 'front proxy' and will be using load-balanced endpoints, with a round-robin load balancer policy and a connect timeout of 0.25s. Some of these values might not be suitable for production, so we should, in the future, make them configurable. We can achieve that by substituting the relevant values with environment variables.

clusters:
  - name: burgers.basket.api
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: burgers.basket.api
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    { address: burgers.basket.api, port_value: 80 }
  - name: burgers.ordering.api
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: burgers.ordering.api
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    { address: burgers.ordering.api, port_value: 80 }
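The environment variable substitution mentioned above needs a templating step, since Envoy does not expand environment variables in a static YAML file by itself. A minimal sketch using sed at container start-up (the __CONNECT_TIMEOUT__ placeholder name is made up):

```shell
# Sketch: substitute a placeholder in a config template before starting Envoy.
cat > envoy.yaml.tmpl <<'EOF'
connect_timeout: __CONNECT_TIMEOUT__
EOF

# Default to the value from the post if the variable isn't set.
CONNECT_TIMEOUT="${CONNECT_TIMEOUT:-0.25s}"
sed "s/__CONNECT_TIMEOUT__/${CONNECT_TIMEOUT}/" envoy.yaml.tmpl > envoy.yaml
cat envoy.yaml
```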

Info: Check out my burger shop repository @8895a59d7e for a concrete example which shows how it's set up in Docker Compose.

Next up

See Part 2 - Integrating Keycloak OIDC with our Envoy API Gateway

We will look at integrating an identity service with our gateway using Keycloak + OIDC. We will modify this gateway to handle authentication and decode the JWT to pass user information on to upstream services. To do this we will be using the following Envoy filters: JWT and OAuth.



Hi, I'm Jason. I'm a software engineer and architect. You can follow me on Twitter, see some of my work on GitHub, or read more about me on my website.