Three years ago, someone in a product meeting said the sentence that precedes most accidental platform projects: “we just need to reuse the form components across products.” That sentence launched a months-long architectural effort that touched four product teams, reshaped deployment pipelines, and redefined how we thought about shared infrastructure in a regulated banking environment.
This is the honest version of what happened.
The problem we were actually solving
The OneBank initiative at Scotiabank CCAU aimed to give customers a unified digital experience for applying to any of four financial products: loans, credit cards, savings accounts, and insurance. Each product had its own team, its own release cadence, its own regulatory requirements, and its own technical history. What they shared was a customer — and a regulatory obligation to collect essentially the same information in essentially the same way.
The surface-level problem was duplication. Each team had built its own onboarding flow.
Four independent implementations of the same onboarding logic. Each with its own bugs, its own compliance interpretations, its own version of “how we do date validation.” This wasn’t a technical failure — teams had built reasonable solutions to their immediate problems. The failure was organizational: there was no mechanism for shared learning, shared fixes, or shared governance.
The deeper problem, the one that actually needed solving, was that banking onboarding is a regulatory domain, not just a UI problem. Every field is potentially legally significant. Validation rules are compliance requirements, not suggestions. The audit trail isn’t optional. Any platform built to unify this couldn’t just share components — it had to share constraints.
Phase 1: What we tried first, and why it failed
The natural first step was a shared component library. One <Button>, one <Input>, one <Modal>. Shared across all four products. This is what every shared design system article tells you to build, and it’s correct as far as it goes.
Within months:
| Product | Library version |
|---|---|
| Loans | 3.4.1 |
| Cards | 2.8.0 |
| Accounts | 4.1.0 |
| Insurance | 2.7.3 |
Four products, four different versions, none of them current. The Insurance team was two major versions behind because a v3 change broke their custom form orchestration. The Cards team had cherry-picked a fix from v3.2 by copying the source directly — which meant they no longer received future fixes.
This is normal. I’ve yet to see a large shared UI library that doesn’t end up here. The reason isn’t technical debt or laziness. It’s that a shared UI library without a shared validation model doesn’t solve the actual problem. Teams were still responsible for their own validation logic, their own regulatory context, their own audit strategy. The shared library reduced visual duplication, but it didn’t touch the compliance layer — which was where most of the complexity lived.
We had built the wrong shared layer.
The architectural turning point
The insight that changed the design came from a failed debugging session. A compliance validator reported that the Loans and Cards products were interpreting the same regulatory field differently — one treated it as required, the other as conditional based on persona type. Both interpretations were defensible. Neither was canonical.
The root cause: validation logic lived in four places, maintained by four teams, with no shared schema. There was no single definition of what a valid onboarding submission looked like. There were four approximate definitions, aligned enough to pass testing, divergent enough to cause compliance questions.
The solution wasn’t more components. It was a schema layer that existed before the components.
We restructured the entire library into three layers:
```mermaid
graph TD
  P["@onebank/primitives"] -->|"extends — visual layer only"| Co["@onebank/components"]
  Co -->|orchestrates| F["@onebank/forms"]
  F --> L["Loans"] & C["Cards"] & A["Accounts"] & I["Insurance"]
  L & C & A & I --> BFF["Node.js BFF"]
  BFF --> API["Core Banking API"]
```
The key separation: visual evolution and structural contracts have different rates of change, and different downstream impact. Coupling them in a single package forces consumers to treat every design update as a potential breaking change. Separating them let the design system evolve without causing migration work in product teams.
The @onebank/forms layer is where the most significant architecture lives. It owns the validation schema, the regulatory resolver, and the audit log model. It has no React dependency — it’s importable in Node.js, testable without a browser, and independently versionable from the UI layers above it.
Schema-driven form orchestration
The core of the forms layer is SolicitanteSchema, a Zod schema using z.discriminatedUnion on tipoPersona:
```typescript
const SolicitanteSchema = z.discriminatedUnion('tipoPersona', [
  z.object({
    tipoPersona: z.literal('NATURAL'),
    rut: RutSchema,
    nombre: z.string().min(2).max(100),
    apellidoPaterno: z.string().min(2).max(100),
    fechaNacimiento: FechaNacimientoSchema,
    // natural person fields
  }),
  z.object({
    tipoPersona: z.literal('JURIDICA'),
    rutEmpresa: RutSchema,
    razonSocial: z.string().min(2).max(200),
    representanteLegal: RepresentanteLegalSchema,
    // legal entity fields
  }),
]);
```
TypeScript narrows the type automatically in conditional branches. A component that renders fechaNacimiento only runs when tipoPersona === 'NATURAL' — and TypeScript enforces this without runtime checks. This eliminated an entire class of bugs: fields rendered for the wrong persona type.
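The narrowing behavior can be sketched without Zod. The union below is a hand-written, simplified stand-in for the type that `z.infer` would produce from `SolicitanteSchema` (subset of fields only), and `displayName` is a hypothetical consumer:

```typescript
// Simplified stand-in for z.infer<typeof SolicitanteSchema> (subset of fields)
type Solicitante =
  | { tipoPersona: 'NATURAL'; rut: string; nombre: string; fechaNacimiento: string }
  | { tipoPersona: 'JURIDICA'; rutEmpresa: string; razonSocial: string };

// Inside each branch, TypeScript narrows the union: fechaNacimiento is
// accessible only when tipoPersona === 'NATURAL', with no runtime check.
function displayName(s: Solicitante): string {
  if (s.tipoPersona === 'NATURAL') {
    return `${s.nombre} (${s.fechaNacimiento})`;
  }
  // s is narrowed to the JURIDICA member here
  return s.razonSocial;
}
```

Accessing `s.fechaNacimiento` outside the `NATURAL` branch is a compile-time error, which is exactly the class of bug the schema eliminated.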
The regulatory configuration function was a more significant architectural decision:
```typescript
function resolverConfigFormulario(
  estado: EstadoFormulario,
  contextoRegulatoria: ContextoRegulatoria
): ConfigFormulario {
  // Pure function — no hooks, no side effects, no React
  const esEmpleadoDependiente = estado.tipoPersona === 'NATURAL'
    && estado.tipoEmpleo === 'DEPENDIENTE';
  return {
    campos: {
      liquidacionSueldo: {
        requerido: esEmpleadoDependiente && contextoRegulatoria.productoRequiereComprobante,
        visible: esEmpleadoDependiente,
      },
      // ... other field configurations
    },
  };
}
```
This is a pure function. It doesn’t know about React. It doesn’t have side effects. It takes the current form state and regulatory context and returns a configuration object. Business rules are testable in complete isolation:
```typescript
it('requires salary proof for dependent employee applying for loans', () => {
  const config = resolverConfigFormulario(
    { tipoPersona: 'NATURAL', tipoEmpleo: 'DEPENDIENTE' },
    { productoRequiereComprobante: true }
  );
  expect(config.campos.liquidacionSueldo.requerido).toBe(true);
});
```
No mocks, no DOM, no React testing library. The test runs in under 2 seconds. This was the prerequisite for fast compliance testing — before this architecture, testing regulatory rules required spinning up a full browser environment.
The BFF layer: hidden complexity
The onboarding flow involves more than a form and a submit button. Behind the BFF:
```mermaid
sequenceDiagram
    participant U as User
    participant App as Product app
    participant Forms as @onebank/forms
    participant BFF
    participant Auth
    participant CB as Core Banking
    participant AuditLog

    U->>App: submits onboarding form
    App->>Forms: resolveFormConfig(state, regulatoryCtx)
    Forms-->>App: FormConfig (validated, field-level constraints)
    App->>Forms: SolicitanteSchema.parse(formData)
    Forms-->>App: validated data or ZodError
    App->>BFF: POST /onboarding/productCode
    BFF->>Auth: validateToken + extractClaims
    Auth-->>BFF: claims (userId, sessionId)
    BFF->>BFF: rateLimitCheck + fraudSignalAppend
    BFF->>CB: POST /v2/applications
    CB-->>BFF: applicationId + estado PENDIENTE
    BFF-->>App: applicationId + nextStep
    App->>Forms: logEvento(EventoFormulario)
    Forms->>AuditLog: persist(evento, versionFormulario)
    AuditLog-->>Forms: confirmed
```
We underestimated the BFF complexity significantly. The Node.js gateway needed to handle:
Authentication variance: Different products had different session models. The Loans team had implemented their own JWT validation; the Cards team used a shared auth service. Normalizing this at the BFF boundary took longer than the entire @onebank/primitives layer.
Rate limiting at the application boundary: Core Banking had rate limits we didn’t fully document until we hit them in staging. The BFF needed to implement token bucket rate limiting per customer and per product — not global rate limiting, which would have let a spike on one product degrade others.
Request orchestration complexity: Some products required pre-validation calls to external services (DICOM credit checks, CMF regulatory checks) before submitting to Core Banking. These couldn’t be done client-side — they required server-side coordination. The BFF became the orchestration layer for pre-submission validation chains.
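The per-customer, per-product token bucket can be sketched as follows. This is a minimal in-memory version for illustration, not the production BFF middleware; the class and parameter names are mine:

```typescript
// Token-bucket limiter keyed per (customer, product), so a spike on one
// product cannot exhaust another product's budget for the same customer.
class TokenBucketLimiter {
  private buckets = new Map<string, { tokens: number; last: number }>();

  constructor(private capacity: number, private refillPerSec: number) {}

  allow(customerId: string, productCode: string, now = Date.now()): boolean {
    const key = `${customerId}:${productCode}`;
    const b = this.buckets.get(key) ?? { tokens: this.capacity, last: now };
    // Refill proportionally to elapsed time, capped at capacity.
    b.tokens = Math.min(
      this.capacity,
      b.tokens + ((now - b.last) / 1000) * this.refillPerSec
    );
    b.last = now;
    const allowed = b.tokens >= 1;
    if (allowed) b.tokens -= 1;
    this.buckets.set(key, b);
    return allowed;
  }
}
```

A production version would also need bucket expiry and shared state across BFF instances (e.g. Redis), which is part of why this scope grew beyond the original estimate.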
The lesson: BFF complexity grows non-linearly with the number of upstream dependencies. We had scoped the BFF as “just authentication and routing.” By the time it was stable in production, it was also handling circuit breaking, request aggregation, error normalization, and downstream retry logic.
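Circuit breaking, one of the responsibilities the BFF accumulated, has a similarly small core. A synchronous sketch under stated assumptions (the real gateway would wrap async downstream calls; names and thresholds are illustrative):

```typescript
// After `threshold` consecutive failures, fail fast for `cooldownMs`,
// then allow a single probe through ("half-open") to test recovery.
class CircuitBreaker {
  private failures = 0;
  private openedAt: number | null = null;

  constructor(private threshold = 5, private cooldownMs = 30_000) {}

  call<T>(fn: () => T, now = Date.now()): T {
    if (this.openedAt !== null) {
      if (now - this.openedAt < this.cooldownMs) {
        throw new Error('circuit open: failing fast');
      }
      this.openedAt = null; // half-open: let one probe reach downstream
    }
    try {
      const result = fn();
      this.failures = 0; // success closes the circuit
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.threshold) this.openedAt = now;
      throw err;
    }
  }
}
```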
Regulatory context as a first-class architectural concern
Chilean financial regulation (SBIF, now CMF after the 2019 merger) imposes specific requirements on what data must be collected, in what order, with what validation. These requirements vary by product category, customer segment, and requested amount thresholds.
We made a mistake early on: treating regulatory requirements as configuration data that product teams would manage. In practice, regulatory configuration requires legal review, not just developer configuration. The distinction matters architecturally — it means regulatory context cannot be freely edited in a product app without a review gate.
The ContextoRegulatoria object in the forms layer became a sealed type — constructed only by a regulatory configuration service with an explicit review workflow, not by arbitrary product code. This was a boundary we added after discovering that a configuration mistake had caused incorrect field requirements in a staging environment.
```typescript
// Not exported — constructed only by RegulatoryConfigService
type ContextoRegulatoria = {
  readonly productoRequiereComprobante: boolean;
  readonly montoRequiereDocumentacionAdicional: boolean;
  readonly segmentoRequiereVerificacion: SegmentoVerificacion;
  readonly _brand: 'RegulatoryConfigService'; // nominal type brand
};
```
The brand prevents arbitrary construction. A product app cannot create a ContextoRegulatoria — it must request one from the service that owns the regulatory configuration and its review history.
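A minimal sketch of the pattern. `emitirContexto` and the `SegmentoVerificacion` values are hypothetical stand-ins for the real service and its review workflow:

```typescript
type SegmentoVerificacion = 'BASICA' | 'REFORZADA'; // hypothetical values

type ContextoRegulatoria = {
  readonly productoRequiereComprobante: boolean;
  readonly montoRequiereDocumentacionAdicional: boolean;
  readonly segmentoRequiereVerificacion: SegmentoVerificacion;
  readonly _brand: 'RegulatoryConfigService';
};

// Stand-in for RegulatoryConfigService: the only code that mints the type.
// In the real system this would read reviewed regulatory configuration,
// not compute it inline.
function emitirContexto(productCode: string): ContextoRegulatoria {
  return {
    productoRequiereComprobante: productCode === 'LOANS', // illustrative rule
    montoRequiereDocumentacionAdicional: false,
    segmentoRequiereVerificacion: 'BASICA',
    _brand: 'RegulatoryConfigService',
  };
}
```

One caveat worth knowing: because TypeScript is structurally typed, a string-literal brand deters rather than strictly prevents construction. Keeping the branded type unexported, or branding with a non-exported `unique symbol`, is what makes it effectively unforgeable outside the owning module.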
CI/CD: independent builds per package
The monorepo uses Jenkins with independent build pipelines per package. This was a deliberate decision that took three iterations to get right.
Key decisions in this pipeline:
Changesets-based semver: Every PR that modifies a package must include a changeset file declaring the type of change. This eliminates the “who bumped this version” ambiguity and generates accurate changelogs. The automation is strict: a PR without a changeset fails CI if it touches package source files.
Explicit consumer opt-in: When a package publishes, it doesn’t automatically trigger consumer updates. Consumer apps choose when to update. This sounds like it would cause drift — and it does create some lag — but it prevents a library change from forcing an emergency release in a product team’s sprint. Teams update on their own cadence.
95% coverage gate on business logic: The coverage requirement applies specifically to the @onebank/forms business logic — resolverConfigFormulario and validation rules. It doesn’t apply globally. Global coverage requirements on mixed codebases create perverse incentives to write trivial tests for UI code and skip complex tests for business logic. We scope the gate to the layer that matters most.
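A scoped gate like this is straightforward to express if the test runner is Jest (the post names Jenkins but not the runner, so this is an assumption; paths and directory names are illustrative):

```typescript
// jest.config.ts — coverage gate scoped to the business-logic layer only.
// No "global" threshold: per-path keys hold @onebank/forms logic to 95%
// without forcing trivial tests onto UI code.
import type { Config } from 'jest';

const config: Config = {
  collectCoverageFrom: ['packages/forms/src/**/*.ts'],
  coverageThreshold: {
    './packages/forms/src/config/': {
      branches: 95, functions: 95, lines: 95, statements: 95,
    },
    './packages/forms/src/validation/': {
      branches: 95, functions: 95, lines: 95, statements: 95,
    },
  },
};

export default config;
```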
The observability gap we didn’t see coming
Six months after the platform launched in production, a compliance audit asked: “How often does a customer start the loan application and not complete it, and at which step?” We couldn’t answer that question. We had application submission rates. We didn’t have step-level abandonment data.
The audit log model (EventoFormulario) existed from the start — but it was designed around submitted applications, not in-progress sessions. We retrofitted step-level event tracking nine months after launch, which required a schema migration and a coordination effort across all four product teams.
The hard lesson: observability requirements for regulated software are not optional and they’re not the same as application monitoring requirements. Application monitoring tells you when things fail. Regulatory observability tells you what happened before, during, and after every customer interaction — and it must be retained in a form that’s reconstructable on demand.
We should have designed the EventoFormulario schema with regulatory observability requirements from the start, not general logging requirements. The distinction changes what you record, how you store it, how long you retain it, and who can query it.
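To make the distinction concrete, here is a sketch of a step-aware event record and the abandonment query the audit asked for. Field names and event types are hypothetical; the post does not show the real `EventoFormulario` shape:

```typescript
// Hypothetical step-level audit event: records in-progress sessions,
// not just submitted applications, which is what the retrofit added.
type EventoFormulario = {
  sessionId: string;
  productCode: 'LOANS' | 'CARDS' | 'ACCOUNTS' | 'INSURANCE';
  versionFormulario: string; // schema version active when the event occurred
  step: string;              // e.g. 'datos-personales'
  tipo: 'STEP_ENTERED' | 'STEP_COMPLETED' | 'STEP_ABANDONED' | 'SUBMITTED';
  timestamp: string;         // ISO-8601
};

// "How many sessions entered each step but never completed it?"
function abandonmentByStep(eventos: EventoFormulario[]): Map<string, number> {
  const entered = new Map<string, Set<string>>();
  const completed = new Map<string, Set<string>>();
  for (const e of eventos) {
    const target =
      e.tipo === 'STEP_ENTERED' ? entered :
      e.tipo === 'STEP_COMPLETED' ? completed : null;
    if (!target) continue;
    if (!target.has(e.step)) target.set(e.step, new Set());
    target.get(e.step)!.add(e.sessionId);
  }
  const out = new Map<string, number>();
  for (const [step, sessions] of entered) {
    const done = completed.get(step) ?? new Set<string>();
    let abandoned = 0;
    for (const s of sessions) if (!done.has(s)) abandoned += 1;
    out.set(step, abandoned);
  }
  return out;
}
```

With events like these retained per session, the audit question becomes a query rather than an engineering investigation.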
Developer experience as an architectural metric
One of the clearest signals that an architecture is healthy: how long does it take a new engineer to ship their first real change?
When we started the platform, the answer was approximately three weeks. A new developer on a product team needed to understand the existing form implementation, the validation approach, the BFF API contract, the deployment pipeline, and the testing infrastructure. None of this was consistent across teams.
After the 3-layer architecture stabilized, we set a specific benchmark: a new developer should be able to produce a compliant, on-brand, correctly validated form page in four hours. This number forced decisions we wouldn’t have made otherwise:
- The @onebank/forms package needed a CLI scaffold command (npx create-onebank-form) that generated a correctly structured form with the right schema imports, the right resolver call, and the right audit log hookup. Without the scaffold, new developers spent hours figuring out the correct pattern by reading existing code.
- The Storybook needed full documentation of every form state — not just the happy path, but the regulatory edge cases, the error states, and the loading states. Developers cannot implement something they can’t see.
- The TypeScript errors needed to be actionable. A generic “Property ‘X’ does not exist” error on a discriminated union is not useful. We added custom error messages to the most common misuse patterns.
The four-hour benchmark also exposed a gap we hadn’t anticipated: the BFF documentation was internal to the platform team. Product team developers couldn’t easily discover the API contract for their product’s onboarding endpoint. We added an OpenAPI spec generated from the BFF’s TypeScript types — which also became the source of truth for mock server generation in integration tests.
What we got wrong
The shared Postgres schema: In the first six months, audit events from all four products wrote to a shared table with a productCode discriminator. This was fine initially and became a problem when the Insurance team needed a significantly different audit schema for their regulatory context. We migrated to product-specific audit schemas at month eight — earlier would have been easier.
The codemod strategy: When we introduced breaking changes in @onebank/primitives, we promised automated codemods that would handle 80% of the migration work. The 80% figure was accurate. But we underestimated the effort required to handle the remaining 20% — which involved cases where teams had extended the primitive APIs in non-standard ways. Document and enforce extension points early; retrofitting codemods for undocumented extension patterns is significantly harder.
The regulatory configuration ownership: We allowed product teams to manage their own regulatory configuration for the first eight months. This felt empowering and reduced platform team bottlenecks. It also resulted in a compliance near-miss where a team had configured a field as optional that was legally required for their product category. The sealed ContextoRegulatoria type came after this incident, not before it.
Missing the regulatory review in the definition of done: Every story that changed form behavior needed a compliance review before going to staging. We didn’t formalize this in the CI/CD pipeline — it was a verbal requirement. Three times in the first year, a change went to staging without a compliance review because the developer didn’t know the requirement applied to their change. We added a CI check that detected form behavior changes and blocked staging promotion until a compliance tag was applied by the team’s compliance owner.
Lessons for enterprise onboarding systems
The compliance layer is a first-class architectural concern, not a configuration layer. In regulated industries, the entities that define what’s valid are not the same entities that write the code. Architecture must encode this separation explicitly — otherwise compliance becomes “something devs configure” and governance becomes “something that happens in code review.”
DX is a reliability metric. Friction in development compounds into bugs in production. If a developer has to read three different example components to understand how to add a form field, they will sometimes get it wrong. The 4-hour benchmark was a forcing function for clarity.
Version governance is as important as version management. Semver tells consumers what changed. Governance tells consumers what they’re allowed to change and when. A shared library without explicit governance decisions — who can approve breaking changes, what lead time is required, how consumer migrations are supported — will drift toward chaos.
Observability requirements in regulated systems are not optional and they’re not the same as general monitoring. Design your audit model before you design your UI. The regulatory requirements are usually clearer than the UX requirements, and they constrain the system more.
Shared platforms require shared ownership structures. The four product teams that consumed the @onebank packages all had representatives in the platform design review process. This wasn’t optional — it was how we caught problems before they became production incidents. The architecture document that nobody reviewed is the one you’ll regret.
Three years in, the platform supports four products and a fifth in development. The migration automation rate is above 80%, the onboarding benchmark is consistently under four hours, and the compliance audit questions are answerable with log queries rather than engineering investigations. The architecture that makes this possible isn’t clever — it’s explicit about what it owns, clear about its boundaries, and honest about its constraints.
That’s usually what good enterprise architecture looks like.