
DPL Financial Partners

AI Agents Across the Full Software Development Lifecycle — Deployed in Under Three Months

DPL Financial Partners wanted to find out how much of their software development lifecycle could be accelerated and improved with purpose-built AI agents. In under three months, Mashbot deployed agents across every phase of the SDLC — from business requirements capture through testing — fundamentally changing how their engineering and product teams worked.

The Challenge

DPL Financial Partners had a capable engineering team and a growing product backlog. What they lacked was leverage. Business analysts were the bottleneck for requirements capture, story creation was inconsistent and time-consuming, and test coverage was uneven — especially for regression. Leadership had seen the promise of AI in development tooling but had no clear picture of where agents would actually deliver value versus where they would add noise.

The question wasn't whether AI could help with software development. It was: which agents, for which tasks, deployed in which sequence, would create the most durable improvement?

DPL needed a partner who understood both enterprise software development and AI agent architecture — and who could move quickly without disrupting active development cycles.

Our Approach

We started where most engagements should: with the work itself. Before designing a single agent, we spent time with the teams doing the work — product managers capturing requirements, business analysts writing user stories, developers navigating context-switching, QA engineers running regression suites. We mapped every handoff, every bottleneck, every place where a skilled person was doing work that followed a pattern.

What we found was a clear sequence of opportunities. The SDLC at DPL had predictable friction at the same points in every sprint: requirements that arrived underspecified, stories that needed multiple revision cycles, prototyping that depended on developer availability, and test coverage that was always a sprint behind.

Phase 1 — Requirements Capture and Needs Analysis

We built a requirements agent that works alongside product managers during stakeholder interviews. The agent listens, identifies ambiguity in real time, suggests clarifying questions, and produces a structured requirements document — tagged by functional area, priority, and dependency — at the end of each session. A separate needs analysis agent cross-references incoming requirements against the existing product backlog and architecture documentation, surfacing conflicts, redundancies, and implicit dependencies before they reach engineering.

The result: requirements that arrived at engineering with dramatically less ambiguity, and a shared vocabulary between business and technical stakeholders that reduced back-and-forth.
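To make the shape of that output concrete, here is a minimal sketch of a tagged requirements record and the backlog cross-reference the needs analysis agent performs. The field names and the conflict heuristic are illustrative assumptions, not DPL's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One structured requirement from a capture session.

    Field names are illustrative -- not DPL's actual schema.
    """
    id: str
    summary: str
    functional_area: str          # e.g. "onboarding", "reporting"
    priority: str                 # e.g. "must", "should", "could"
    depends_on: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)  # ambiguities flagged in-session

def conflicts(incoming: Requirement, backlog: list[Requirement]) -> list[str]:
    """Naive cross-reference: flag backlog items in the same functional
    area that the incoming requirement does not declare a dependency on."""
    return [
        r.id for r in backlog
        if r.functional_area == incoming.functional_area
        and r.id not in incoming.depends_on
    ]
```

In practice the agent's cross-referencing also draws on architecture documentation; this sketch shows only the backlog side of that check.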

Phase 2 — Prototyping

We deployed a prototyping agent that translates approved requirements into interactive wireframe specifications — not visual designs, but structured component trees and interaction flows that engineers can implement directly. The agent draws on DPL's existing design system and component library, ensuring prototypes are grounded in what's already buildable. Product managers can iterate on the spec without pulling a developer off a sprint.

This changed the shape of the handoff between product and engineering. Instead of developers receiving a requirements document and building a mental model of what it should look like, they received a structured spec that answered most of their clarifying questions before they asked them.
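A structured spec of the kind described above might look like the following sketch: a component tree whose nodes must reference the existing component library, so every node is something already buildable. The component names and validation rule are assumptions for illustration, not DPL's actual format:

```python
# Components the spec is allowed to reference (stand-in for a real
# design-system library; names are illustrative).
COMPONENT_LIBRARY = {"Page", "Form", "TextField", "SubmitButton"}

spec = {
    "component": "Page",
    "children": [
        {"component": "Form", "children": [
            {"component": "TextField", "props": {"label": "Account number"}},
            {"component": "SubmitButton", "props": {"label": "Look up"},
             "on": {"click": "submit-lookup"}},  # interaction-flow hook
        ]},
    ],
}

def validate(node: dict) -> list[str]:
    """Reject any node not grounded in the component library;
    return the names of unknown components."""
    errors = [] if node["component"] in COMPONENT_LIBRARY else [node["component"]]
    for child in node.get("children", []):
        errors += validate(child)
    return errors
```

Because the spec is data rather than a visual mockup, a product manager can edit it and re-validate without engineering involvement.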

Phase 3 — Agile Story Creation

The story creation agent was, in many ways, the highest-frequency intervention. It takes approved requirements and prototype specs and produces INVEST-compliant user stories — complete with acceptance criteria, edge cases, and dependency flags. Stories are structured to DPL's internal template, with effort estimates grounded in historical velocity data from their Jira history.

A review agent then audits each story against a quality rubric before it enters the sprint, catching stories that are too large, acceptance criteria that are untestable, or dependencies that aren't yet resolved. Stories that pass review enter the backlog directly. Stories that fail come back with specific, actionable feedback.
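The rubric check can be sketched as a function over a story record. The thresholds, field names, and testability heuristic below are illustrative assumptions, not DPL's internal rubric:

```python
def review_story(story: dict) -> list[str]:
    """Audit a story against a minimal quality rubric. Returns actionable
    feedback, or an empty list if the story passes review."""
    feedback = []
    if story.get("estimate_points", 0) > 8:
        feedback.append("Story is too large: split before sprint entry.")
    criteria = story.get("acceptance_criteria", [])
    if not criteria:
        feedback.append("No acceptance criteria: every story needs at least one.")
    for c in criteria:
        # Crude testability heuristic: criteria should be phrased as
        # observable behavior (Given/When/Then), not as intent.
        if not c.lower().startswith(("given", "when", "then")):
            feedback.append(f"Possibly untestable criterion: {c!r}")
    if story.get("unresolved_dependencies"):
        feedback.append("Blocked: resolve dependencies before sprint entry.")
    return feedback
```

A passing story returns no feedback and flows to the backlog; a failing one carries its feedback list back to the authoring agent for revision.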

Phase 4 — Development Assistance

Rather than deploying a generic coding assistant, we built development agents tuned to DPL's specific stack, coding standards, and architectural patterns. These agents assist with boilerplate generation, pattern matching against the existing codebase, API integration scaffolding, and code review — surfacing potential issues against DPL's internal standards rather than generic best practices.

We were deliberate about scope: these agents augment developer judgment, they don't replace it. Complex logic, architectural decisions, and security-sensitive code remain fully human-owned. The agents handle the mechanical work that was consuming developer time without requiring developer expertise.

Phase 5 — Test Coverage

The test agents were built last, once we understood the patterns in the codebase well enough to generate meaningful coverage. The agents produce unit tests for new functions based on the acceptance criteria in the corresponding user story, regression tests for modified components based on historical defect patterns, and integration test scaffolding for new API endpoints.

Test agents submit PRs alongside the code they cover. Reviewers see code and tests together, and coverage reports are generated automatically as part of the CI pipeline.

Integration and Governance

All agents are integrated into DPL's existing toolchain — Jira, Confluence, GitHub, and their CI/CD pipeline. There is no separate AI platform for developers to learn. Agents surface in the tools teams already use, at the moment in the workflow where they add value. Every agent action is logged, reviewable, and overridable. DPL's engineering leadership has full visibility into agent activity and the ability to tune or disable any agent independently.

The Results

Deployed in under 90 days — all five agent categories live in production across the full SDLC within three months of engagement start.

Requirements quality score improved 71% — measured by the reduction in engineering questions and revision cycles per requirement, tracked over the first two sprints post-deployment.

Story creation time reduced by 60% — business analysts who previously spent 3–4 hours per sprint on story writing now review and approve agent-generated stories in under an hour, with higher consistency and completeness.

Prototype-to-spec handoff time cut from days to hours — product managers can iterate on specifications independently, without pulling engineering time before a story is ready to build.

Test coverage increased from 54% to 81% — agents generate tests alongside code, making coverage a byproduct of the development process rather than a separate effort that always falls behind.

Developer context-switching reduced measurably — developers report spending significantly less time on requirements clarification and boilerplate, with more time on the judgment-intensive work that requires their expertise.

No disruption to active development — the deployment was sequenced to avoid disrupting in-flight sprints, with agents introduced one phase at a time as the team built confidence in each before moving to the next.

Foundation for continued expansion — the agent infrastructure is now in place to extend into additional workflows: release notes generation, documentation updates, and post-deployment monitoring summaries are already in the pipeline.


Ready to redesign how your enterprise works?

Let's talk about where AI fits into your organization — and where it doesn't yet.