Platform engineering. Introduction

We're going to lightspeed!

This blog post introduces Platform engineering: who needs it, and how a Platform team differs from other development teams. It is based on the book Team Topologies and on the work of its authors, Matthew Skelton and Manuel Pais.

Team types

Platform engineering is a rapidly evolving field that aims to provide the best tools and practices for developing, deploying and operating software applications. Platform engineers are responsible for designing, building and maintaining the platforms that enable developers to focus on their core business logic and deliver value to customers faster and more reliably.

One of the key concepts in platform engineering is team topologies, which organise teams and their interactions based on the type and frequency of communication they need. Team Topologies defines four fundamental team types: stream-aligned teams, platform teams, enabling teams and complicated-subsystem teams. Each team type has a different purpose and mode of collaboration, and they can be combined to form different organisational structures. Henny Portman created an excellent visualisation of the team types in his blog post (Figure 1).

Platform team: a team that works on the underlying platform supporting stream-aligned teams in delivery. The platform simplifies otherwise complex technology and reduces the cognitive load for teams that use it.

Figure 1. Team types, according to Team Topologies

Cloud-native aspect

Another important aspect of platform engineering is cloud native computing, an approach to building and running applications that exploits the advantages of the cloud computing model. Cloud-native applications are designed to be scalable, resilient, observable and portable across different environments. The Cloud Native Computing Foundation (CNCF) is an open-source foundation that hosts and supports many technologies and projects that enable cloud-native computing, such as Kubernetes, Prometheus, Envoy, Istio, Open Policy Agent, OpenTelemetry, Jaeger, Helm and others.

Figure 2 visualises the complexity of the current production-ready reference architecture.

Figure 2. Reference cloud-native architecture

The Platform team configures and operates the sidecar proxy and infrastructure services, helping stream-aligned teams focus on developing core business logic.

Cloud-native is the software approach of building, deploying, and managing modern applications in cloud computing environments.
Source: AWS

Reference architecture deep dive

Figure 2 shows how to use sidecar proxies and platform infrastructure. A sidecar proxy is a small container that runs alongside a primary container. The sidecar proxy can add functionality to the main container, such as authentication, authorization, monitoring, etc.
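As a rough illustration (not taken from the platform described here), a Pod that pairs a business-logic container with a proxy sidecar can be sketched as follows. The Pod is expressed as a Python dict, i.e. the structure that would normally be serialised to YAML, and the container and image names are hypothetical placeholders:

```python
# Illustrative Kubernetes Pod spec with a sidecar proxy, expressed as a
# Python dict (the structure that would be serialised to YAML).
# Container and image names are hypothetical placeholders.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "orders-service"},
    "spec": {
        "containers": [
            {   # main container: the stream-aligned team's business logic
                "name": "app",
                "image": "example.registry/orders-app:1.0",
                "ports": [{"containerPort": 8080}],
            },
            {   # sidecar: a proxy that adds authentication, authorization
                # and monitoring without changing the business-logic code
                "name": "proxy",
                "image": "example.registry/sidecar-proxy:1.0",
                "ports": [{"containerPort": 15001}],
            },
        ]
    },
}
```

Both containers share the Pod's network namespace, which is what lets the proxy intercept and enrich the main container's traffic transparently.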

The platform provides pre-configured Kubernetes resources that let teams quickly and easily deploy business logic. It includes a sidecar proxy configured to use a REST API.

At the bottom of Figure 2, you can find an example of preconfigured infrastructure services integration. This part of the figure illustrates how the Platform team can implement database integration with a message broker via an outbox pattern.

The outbox pattern is a technique for implementing reliable messaging between services. It consists of two main steps: first, the service writes any messages it needs to send to a local database table (the outbox); second, a separate process (the outbox publisher) reads the messages from the outbox and sends them to the message broker.
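The first step of the pattern can be sketched in a few lines of Python. This is a minimal illustration using an in-memory SQLite database, with hypothetical table and topic names, not the platform's actual API:

```python
import json
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, total REAL)")
conn.execute(
    """CREATE TABLE outbox (
        id TEXT PRIMARY KEY, topic TEXT, payload TEXT,
        published INTEGER DEFAULT 0)"""
)

def place_order(order_id: str, total: float) -> None:
    # Step 1 of the outbox pattern: write the business row and the
    # message to be sent in ONE local transaction, so they either both
    # succeed or both roll back.
    with conn:  # the context manager commits on success, rolls back on error
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))
        conn.execute(
            "INSERT INTO outbox (id, topic, payload) VALUES (?, ?, ?)",
            (str(uuid.uuid4()), "orders.created",
             json.dumps({"order_id": order_id, "total": total})),
        )

place_order("o-1", 49.90)
```

Because the outbox row is written in the same transaction as the business data, there is no window in which the order exists but its event was lost.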

The benefits of using the outbox pattern are:

  • It ensures that messages are not lost, even if the service or the message broker fails. Delivery is at-least-once, so consumers should deduplicate by message id.

  • It decouples the service from the message broker, making it easier to test and maintain.

  • It avoids blocking the service while waiting for message delivery acknowledgements.

A platform simplifies the implementation of the outbox pattern. It provides:

  • A configuration that integrates with popular databases (such as PostgreSQL, MySQL, MongoDB, etc.) and lets you easily write messages to the outbox table using a fluent API.

  • A service that monitors the outbox tables and publishes the messages to the message broker (such as Kafka, RabbitMQ, etc.) using various strategies (such as polling, triggers, etc.).

  • A dashboard that lets you monitor and manage the outbox tables and publishers.
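A minimal sketch of such an outbox publisher, using the polling strategy against an in-memory SQLite database and a plain list standing in for the message broker (all names here are hypothetical, not the platform's actual API):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE outbox (
        id TEXT PRIMARY KEY, topic TEXT, payload TEXT,
        published INTEGER DEFAULT 0)"""
)
conn.execute(
    "INSERT INTO outbox (id, topic, payload) VALUES (?, ?, ?)",
    ("m-1", "orders.created", json.dumps({"order_id": "o-1"})),
)
conn.commit()

broker = []  # stands in for Kafka/RabbitMQ in this sketch

def publish_pending(batch_size: int = 100) -> int:
    # Step 2 of the outbox pattern: poll unpublished rows, hand them to
    # the broker, then mark them published. If the process crashes
    # between the send and the update, the message is re-sent on the
    # next poll: at-least-once delivery, deduplicated by id downstream.
    rows = conn.execute(
        "SELECT id, topic, payload FROM outbox WHERE published = 0 LIMIT ?",
        (batch_size,),
    ).fetchall()
    for msg_id, topic, payload in rows:
        broker.append((topic, payload))  # send to the broker
        with conn:
            conn.execute(
                "UPDATE outbox SET published = 1 WHERE id = ?", (msg_id,)
            )
    return len(rows)

publish_pending()
```

Marking a row published only after the broker has accepted it is what makes the polling loop safe to restart at any point.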

The business logic part of a Platform is compatible with different programming languages (such as Java, Python, Node.js, etc.) and messaging formats (such as JSON, Avro, Protobuf, etc.). It also supports transaction models (such as 2PC, saga, etc.) and message ordering guarantees (such as FIFO, causal, etc.).

The following are some of the benefits of using sidecar proxies and platforms:

  • Quick and easy deployment: the platform includes pre-configured Kubernetes resources that can be used to quickly and easily deploy and operate your custom business logic.

  • Flexible: the sample platform can deploy business logic on a variety of platforms, including Kubernetes, AWS, Azure and others.

  • Secure: the sidecar proxy can add security features to the main container, such as authentication and authorization, without changing the business-logic code.

  • Scalable: the sample platform can be scaled to meet the needs of your application.

In this blog post, we will explore some of the benefits and challenges of platform engineering, how team topologies can help optimize the flow of work and feedback, and how cloud-native technologies can enhance the performance and reliability of software applications. We will also share some best practices and resources to learn about platform engineering.