Disclaimer: The details in this post have been derived from the articles shared online by the Tinder Engineering Team. All credit for the technical details goes to the Tinder Engineering Team. The links to the original articles and sources are present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.

API gateways sit at the front line of any large-scale application. They expose services to the outside world, enforce security, and shape how clients interact with the backend. Most teams start with off-the-shelf solutions like AWS API Gateway, Apigee, or Kong. For many use cases, these tools work well, but at some point they may no longer be sufficient.

Tinder reached that point sometime around 2020. Over the years, Tinder had scaled to over 500 microservices. These services communicate internally via a service mesh, but external-facing APIs, handling everything from recommendations to matches to payments, needed a unified, secure, and developer-friendly entry point. Off-the-shelf gateways offered power, but not precision. They imposed constraints on configuration, introduced complexity in deployment, and lacked deep integration with Tinder’s cloud stack.

There was also a velocity problem. Product teams push frequent updates to backend services and mobile clients. The gateway needed to keep up: every delay in exposing a new route or tweaking behavior at the edge slowed down feature delivery.

Then came the bigger concern: security. Tinder operates globally. Real user traffic pours in from over 190 countries.
So does bad traffic, including bots, scrapers, and abuse attempts. The gateway became a critical choke point. It had to enforce strict controls, detect anomalies, and apply protective filters without slowing down legitimate traffic.

The engineering team needed more than an API gateway. It needed a framework that could scale with the organization, integrate deeply with internal tooling, and let teams move fast without compromising safety. This is where TAG (Tinder API Gateway) was born.

Challenges before TAG

Before TAG, gateway logic at Tinder was a patchwork of third-party solutions. Different application teams had adopted different API gateway products, each with its own tech stack, operational model, and limitations. What worked well for one team became a bottleneck for another. This fragmented setup introduced real friction:
Here’s a glimpse at the complexity of session management across APIs at Tinder before TAG.
At the same time, core features were simply missing or difficult to implement in existing gateways. Some examples were as follows:
These weren’t edge cases. They were daily needs in a fast-moving, global-scale product. The need was clear: a single, internal framework that let any Tinder team build and operate a secure, high-performance API gateway with minimal friction.

Limitations of Existing Solutions

Before building TAG, the team evaluated several popular API gateway solutions, including AWS API Gateway, Apigee, Kong, Tyk, KrakenD, and Express Gateway. Each of these platforms came with its strengths, but none aligned well with the operational and architectural demands at Tinder. Several core issues surfaced during evaluation:
What is TAG?

TAG, short for Tinder API Gateway, is a JVM-based framework built on top of Spring Cloud Gateway. It isn’t a plug-and-play product or a single shared gateway instance. It’s a gateway-building toolkit: each application team can use it to spin up its own API gateway instance, tailored to its specific routes, filters, and traffic needs. See the diagram below for reference:
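Because TAG builds on Spring Cloud Gateway, each gateway instance boils down to a set of routes, where every route pairs a path predicate with filters and a backend URI. As a rough sketch of that underlying model (the route IDs, paths, and service URIs below are hypothetical, not Tinder's actual configuration), a Spring Cloud Gateway instance can declare its routes in configuration like this:

```yaml
spring:
  cloud:
    gateway:
      routes:
        # Hypothetical route: forward /v1/recs/** to an internal service
        - id: recommendations
          uri: http://recs.internal.example
          predicates:
            - Path=/v1/recs/**
          filters:
            - AddRequestHeader=X-Edge, tag
        # Hypothetical route for a matches API
        - id: matches
          uri: http://matches.internal.example
          predicates:
            - Path=/v1/matches/**
```

In a toolkit model like TAG's, each application team would own a configuration like this for its own gateway instance, with custom filters plugging into the same per-route pipeline.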