Raising the Bar for Insurance Innovation with Dynamic CI/CD Pipelines

About the Client

A world-leading provider of reinsurance and insurance was launching a new business offering property and casualty insurance on a white-label, B2B2C basis. We supported the transition from concept to commercialisation by enhancing the digital platform.

Challenge: Integrating a Legacy Java Stack with DevOps

Responsiveness and stability were core requirements for this insurance start-up. To facilitate continual adaptation without compromising performance, the parent company determined that the business would be hosted on Amazon Web Services (AWS), adopting cloud-native and DevOps principles from the outset.

While this was a new implementation on a greenfield platform, it needed to leverage as much functionality as possible from Fadata’s insurance process platform, INSIS. This Java/Oracle stack is a central component, handling distribution, underwriting, marketing and claims. Combining it with DevOps principles of automation, flow, safety and security was a highly complex undertaking.

To overcome these challenges, we were enlisted as an external partner to design and implement Continuous Integration / Continuous Delivery (CI/CD) pipelines for the platform team and five development squads. The work encompassed business-led data changes, business-led workflow and UI changes, developer-led schema changes and developer-led extensions.

In addition to the technical challenges, there was time pressure to contend with – the business had to be operational within a 12-month timeframe. This was no mean feat considering the strict regulatory environment.

Furthermore, there were cultural factors to consider. The third-party teams deployed to develop INSIS had never previously worked on a fully cloud-native platform using DevOps principles.

Solution: A Robust, Highly Engineered CI/CD Pipeline

Our brief was to devise an architecture hosted on AWS using CI/CD to ensure software delivery was fast, stable and sustainable.

At the outset, we produced a high-level diagram proposing how both code and configuration could be pushed through the system in a safe, reliable, and repeatable way. Then we suggested practical measures to enhance and optimise the development cycle.

This was quickly developed into a proof of concept using automation, then deployed into a test environment to verify that it worked as intended.

Throughout the process, our engineers carefully selected the most appropriate AWS and open-source tools to optimise software delivery capability. This was a critical factor, enabling the business to respond quickly to market demands without compromising product integrity or platform stability.

Key elements of the implementation include:

Multi-account AWS structure

Allowing developers greater autonomy, while providing guardrails to minimise the blast radius of any performance and security issues, is central to the DevOps ethos. The implementation facilitates this via the AWS multi-account model. It has dedicated accounts for ten specific environments and tasks, including Shared Services, Development Environments, Production Environments and Datalake Services. This provides tighter controls around user access while allowing teams to work flexibly in developing their infrastructure with reduced risk. It also helps to manage the permissions available to pipelines, limiting what can be built where. Grouping business functions and services around AWS accounts also improves the visibility of spend, especially for consumables, such as bandwidth costs.
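
The project's account layout and IAM roles are internal detail; purely as an illustration, the sketch below (Python with boto3) shows how a pipeline step might assume a narrowly scoped role in a target environment account, which is one way a multi-account model limits what a pipeline can build and where. The account ID, role name and session name are hypothetical.

    # Hypothetical illustration of cross-account access in a multi-account AWS setup.
    # The account ID and role name are placeholders, not the client's values.
    import boto3

    def assume_environment_role(account_id: str, role_name: str) -> boto3.Session:
        """Assume a scoped role in a target environment account and return a session."""
        sts = boto3.client("sts")
        response = sts.assume_role(
            RoleArn=f"arn:aws:iam::{account_id}:role/{role_name}",
            RoleSessionName="pipeline-deploy",
        )
        creds = response["Credentials"]
        # The returned session can only do what the role in the target account
        # permits, keeping the blast radius of any one pipeline small.
        return boto3.Session(
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )

    # Example: list the S3 buckets visible to the development-environment role.
    dev_session = assume_environment_role("111111111111", "pipeline-deploy-role")
    print([b["Name"] for b in dev_session.client("s3").list_buckets()["Buckets"]])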

Jenkins CI/CD

A critical component of the infrastructure is Jenkins, a powerful open-source automation server for CI/CD pipelines. We used it to build out components and individual environments and to manage code, configuration and data generated by developers. Configurations for all jobs are stored within the platform’s 45 code repositories using declarative Jenkinsfiles. This provides version control for build tasks as well as an audit trail for changes to development and deployment processes. It enables developers to control how their pipeline operates without needing administrative approval and allows pipeline changes to be tested through branching structures.
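
The pipeline definitions themselves live in the declarative Jenkinsfiles mentioned above and are not reproduced here. As a small, hypothetical sketch of how such jobs can be driven and inspected programmatically, the snippet below uses the third-party python-jenkins client; the server URL, credentials and job name are placeholders.

    # Hypothetical illustration of triggering and monitoring a Jenkins job via its API.
    # The server URL, credentials and job name are placeholders.
    import time
    import jenkins  # third-party "python-jenkins" package

    server = jenkins.Jenkins(
        "https://jenkins.example.internal",
        username="ci-bot",
        password="api-token",
    )

    # Trigger a parameterised pipeline job; the job itself is defined by a
    # declarative Jenkinsfile held in the application's code repository.
    server.build_job("platform-deploy", {"ENVIRONMENT": "dev"})

    # Allow the queued build to start, then poll the latest build until it finishes.
    time.sleep(30)
    last = server.get_job_info("platform-deploy")["lastBuild"]["number"]
    while server.get_build_info("platform-deploy", last)["building"]:
        time.sleep(10)
    print(server.get_build_info("platform-deploy", last)["result"])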

Containers, service discovery and pipelines

Dedicated Amazon Elastic Container Service (ECS) clusters have been built for each environment where front-end website applications can be deployed. Alongside this is a Jenkins pipeline for building and deploying the containers and integrating with Amazon Route 53 service discovery for dynamic lookups of containers and services.
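
Because containers register themselves through service discovery as they start and stop, other components can resolve them at run time instead of relying on fixed addresses. As an illustration with hypothetical namespace and service names, the sketch below queries the registry (AWS Cloud Map, which backs Route 53 service discovery) using boto3.

    # Hypothetical illustration of resolving containers registered via ECS
    # service discovery. Namespace and service names are placeholders.
    import boto3

    sd = boto3.client("servicediscovery")

    # Ask the registry for the healthy instances of a front-end service.
    response = sd.discover_instances(
        NamespaceName="dev.platform.internal",
        ServiceName="quote-web",
        HealthStatus="HEALTHY",
    )

    for instance in response["Instances"]:
        attrs = instance["Attributes"]
        # ECS registers each task's IP address and port as instance attributes.
        print(instance["InstanceId"], attrs.get("AWS_INSTANCE_IPV4"), attrs.get("AWS_INSTANCE_PORT"))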

Compliance and automation

A robust suite of AWS tools has been deployed to monitor and manage compliance of resources and instances of Amazon Elastic Compute Cloud (EC2). These include CloudTrail, AWS Config and AWS EC2 Systems Manager.
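
No specific Config rules or CloudTrail queries from the project are assumed; the sketch below simply illustrates, with boto3, the kind of checks these services make possible, such as listing rules that currently report non-compliant resources and pulling recent EC2 launch events from the audit trail.

    # Hypothetical illustration of compliance and audit queries with boto3.
    import boto3

    config = boto3.client("config")
    cloudtrail = boto3.client("cloudtrail")

    # List AWS Config rules that currently report non-compliant resources.
    rules = config.describe_compliance_by_config_rule(ComplianceTypes=["NON_COMPLIANT"])
    for rule in rules["ComplianceByConfigRules"]:
        print("Non-compliant rule:", rule["ConfigRuleName"])

    # Pull the most recent EC2 launch events recorded by CloudTrail.
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
        MaxResults=5,
    )
    for event in events["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])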

The system has been entirely automated, using HashiCorp’s Terraform for the 603 AWS resources per environment and Packer/Ansible for IaaS components. Building it in this way was more time-consuming upfront than using point-and-click processes, but it has underpinned better speed, efficiency and stability for the long term.
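
The project’s actual Terraform configuration and variable layout are not shown here. As a rough sketch of what "entirely automated" means in practice, the snippet below wraps the Terraform CLI so that a whole environment can be built from a per-environment variable file with a single command; the paths and file names are hypothetical.

    # Hypothetical wrapper around the Terraform CLI, illustrating a scripted
    # (rather than point-and-click) environment build. Paths are placeholders.
    import subprocess

    def build_environment(env_name: str) -> None:
        """Initialise and apply the Terraform configuration for one environment."""
        var_file = f"environments/{env_name}.tfvars"
        subprocess.run(["terraform", "init", "-input=false"], check=True)
        subprocess.run(
            ["terraform", "apply", "-input=false", "-auto-approve", f"-var-file={var_file}"],
            check=True,
        )

    if __name__ == "__main__":
        # Rebuilding a whole development environment becomes a single command.
        build_environment("dev")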

Overall, our engineers completed 3,400 commits (adding 105,983 lines and removing 38,685) and 14,900 builds, of which 1,797 were full production rebuilds, while CloudTrail and Athena reported 142 million API calls. All of this was achieved within a nine-month window.

Outcome: Strong Foundations Enable Rapid and Reliable Software Delivery

We held an initial pipeline workshop in June 2018. By April 2019, the start-up was a working entity and had sold its first insurance policy. By June 2019, it had achieved the milestone target of securing its first distribution partner.

This velocity from concept to launch is ground-breaking in a sector that is strictly governed by regulations and where new products are scrutinised for their integrity.

Leveraging the capabilities of the AWS environment with modern DevOps ways of working played a central role in this achievement. The IT team responded well to the higher levels of autonomy, with developers and operations staff empowered and energised by the work environment.

From a technical perspective, noteworthy outcomes include:

  • An ability to build new developer environments, comprising over 600 resources, from scratch in 12 minutes. 
  • Full automation, even for notoriously problematic areas such as systems testing. 
  • Code is produced and tested in short cycles, facilitating frequent product and service updates to satisfy evolving customer demands. 
  • Instances – and whole environments – are treated as cattle, not pets, enabling rapid iteration of application environments. 

This case study is based on work completed by DevOpsGroup before the team joined forces with Sourced Group, an Amdocs company.