In the 1960s, automobiles manufactured in Japan consistently beat their competitors in the American market. Many attribute this success to the lean manufacturing methodology of the automotive industry, best known from the Toyota Production System (TPS). The software industry borrowed many of these methodologies into software development, which brought about agile software development.

For software to deliver value, developing it with agile methodologies is not enough. A full software development life cycle (SDLC) includes build, release and upgrades as well, some of which are managed by different departments in the organization. DevOps extends the agile methodology across those departments. DevOps is ultimately about culture, but it is tooling that enables that culture.

Automation Pipelines

The workhorse of DevOps tooling is the automation pipeline (e.g. Jenkins, Azure DevOps, GitHub Actions). These pipelines expedite iterations with frequent feedback on software quality, whether for a conventional SDLC workflow or the more recent infrastructure-as-code workflow. For the SDLC, the goal is to establish continuous integration (CI) and ultimately continuous deployment (CD).
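As a sketch of what a CI pipeline looks like in practice, here is a minimal GitHub Actions workflow. It assumes a hypothetical Python project with a `requirements.txt` and pytest tests; the file path and job details are illustrative, not prescriptive.

```yaml
# Hypothetical CI workflow, stored at .github/workflows/ci.yml
name: ci
on: [push, pull_request]   # run on every push and pull request for fast feedback
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest          # fail the build on any test regression
```

Each push triggers the full test suite, so quality feedback arrives within minutes of every change rather than at the end of a release cycle.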

Serverless Deployment

With serverless deployment, the operational work of managing computing resources is abstracted away. The serverless model further simplifies the SDLC and is ideal for common use cases such as API services, IoT, and scheduled and event-driven tasks.
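To illustrate, here is a minimal AWS Lambda-style handler in Python for an API use case. The event shape follows the API Gateway proxy format, but only the fields used below are assumed; the function and parameter names are illustrative.

```python
import json

def handler(event, context):
    """Minimal serverless function: respond to an HTTP request.

    No servers, processes or scaling to manage; the platform invokes
    this function on demand with the request as 'event'.
    """
    # 'queryStringParameters' may be absent or None in the proxy format
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation for testing; in production the cloud runtime calls handler()
resp = handler({"queryStringParameters": {"name": "devops"}}, None)
print(resp["statusCode"])  # 200
```

The same handler signature works for scheduled and event-driven tasks; only the shape of `event` changes.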

Data Pipelines

Another creative use of automation pipelines is data pipelines. Data engineering tasks include ingestion, ETL, integration and storage, and automation pipelines are ideal tools for orchestrating these tasks.
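The ETL pattern behind such data pipelines can be sketched in a few pure functions. This is a toy example using only the standard library; the field names and the list standing in for a storage sink are assumptions for illustration.

```python
import csv
import io

def extract(raw_csv: str):
    """Extract: parse raw CSV text into dict records (ingestion step)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(records):
    """Transform: normalize fields and drop incomplete rows."""
    return [
        {"device": r["device"].strip().lower(), "temp_c": float(r["temp_c"])}
        for r in records
        if r.get("device") and r.get("temp_c")
    ]

def load(records, sink: list):
    """Load: append clean records to a sink (a list stands in for storage)."""
    sink.extend(records)
    return len(records)

sink = []
raw = "device,temp_c\nSensor-A ,21.5\n,19.0\n"
load(transform(extract(raw)), sink)
print(sink)  # [{'device': 'sensor-a', 'temp_c': 21.5}]
```

In a real pipeline, each stage would run as a step in the automation tool, so failures are reported per stage and retries are scoped to the step that failed.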

Observability

An observability setup enables instant feedback, an important construct of DevOps. An observability stack typically consists of metrics collection, log shipping, performance monitoring, request tracing and visualization.
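Log shipping works best when applications emit structured logs. Below is a minimal sketch of JSON logging with the Python standard library; the field names (including the `trace_id` used to correlate with request tracing) are an illustrative schema, not a standard one.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line, easy for shippers to parse."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # hypothetical correlation field linking logs to request traces
            "trace_id": getattr(record, "trace_id", None),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

# extra fields become attributes on the record, picked up by the formatter
log.info("order placed", extra={"trace_id": "abc123"})
```

A log shipper then forwards these JSON lines to the aggregation backend, where they can be joined with metrics and traces for visualization.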

More on automation

  • Public Key Infrastructure 3 of 3 – PKI Implementation - After the last two posts, now we can focus on PKI implementation. The use case is software testing, where we need to create and recycle a lot of short-lived certificates. Typically, we don't have to create public certificates because testing workload is internal. Also, hosting a public CA is much…
  • Public Key Infrastructure 2 of 3 – Certificate Automation - Following the last post on PKI, we'll discuss automation of certificate issuance. Two key activities to automate are: validation of the requestor and issuance of the certificate. Validation Validation isn't always required. For private CAs, the trust boundary does not go beyond the internal engineering team, there is little incentive…
  • Public Key Infrastructure 1 of 3 – Basics - In 2021, I wrote an intro to Public Key Infrastructure (PKI). Now that I have to host my own certificate authority, I decide to dive a little deeper into PKI in this series of posts. In software testing scenario, we need to issue (and recycle) a lot of certificates, and…
  • Workload Identity on Kubernetes 2 of 2 – EKS - I discussed in my previous post on workload identity and dived into how it works in AKS (Azure Kubernetes Service). In this post I will continue the topic with AWS as the example. From the perspective of CSP, we consider any running process on the cloud resource as workload. Therefore,…
  • Workload Identity on Kubernetes 1 of 2 – AKS - As applications are moved to the cloud, the application workload hosted on virtual machines need to interact with cloud resources. For this, we need an IAM solution with two mechanisms: a (non-human) identity in the cloud service platform (CSP), to represent the application; a way to grant permission to this…

Contact Digi Hunch for Professional Services.