Organisational Maturity Assessment
Assess your organisation's maturity across key dimensions.

People

Select the statement that best describes your current view of your organisation's maturity.

Team Alignment

Teams work in silos with limited understanding of shared outcomes, creating organisational friction and rework. There is no value-stream alignment, and Conway's Law works against delivery efficiency.

Collaboration occurs only in urgent cases (e.g. outages or fire drills). Some joint efforts emerge, but without defined roles or goals. Dependencies are high, slowing delivery. Domain alignment is not formalized.

Teams are increasingly stream-aligned and KPIs are tied to shared product outcomes. Boundaries between delivery and operations are reduced. Team Topologies concepts like enabling teams and team APIs may be emerging.

Independent Value Streams (IVS) are formed: teams are domain-aligned, empowered, and accountable for outcomes. Conway's Law is leveraged intentionally. Coordination overhead is low due to loose coupling. Teams own the full product lifecycle with autonomy and shared incentives.

Ownership Clarity

Legacy systems have no maintainers; documentation is outdated or missing. Critical incidents are hard to assign or resolve. Organisational accountability is minimal, and service boundaries are undefined.

Responsibility is claimed informally. Documentation may exist but is out of sync with reality. Ownership may be duplicated or missing in key areas, leading to missed SLAs or delayed issue triage.

Ownership is mapped at the service/domain level using RACI or similar models. Stream-aligned teams begin taking long-term responsibility for their systems. Incident response improves as handoffs are reduced.

Ownership is embedded into observability dashboards, alerting systems, GitHub org structures, and service catalogs. Teams monitor and revise ownership continuously as architecture evolves. Tooling supports self-serve clarity and accountability.
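
For illustration only: at this level, ownership is something tooling can verify rather than tribal knowledge. A minimal Python sketch, assuming a hypothetical catalog.yaml in which every service declares an owner and an alert channel, could gate CI on ownership completeness:

    # Minimal sketch: fail CI if any service in a (hypothetical) service
    # catalog lacks an owning team or an alerting channel.
    # Assumed catalog.yaml layout:
    #   services:
    #     - name: payments-api
    #       owner: team-payments
    #       alert_channel: "#payments-oncall"
    import sys
    import yaml  # PyYAML

    REQUIRED_FIELDS = ("owner", "alert_channel")

    def check_ownership(path: str = "catalog.yaml") -> int:
        with open(path) as f:
            catalog = yaml.safe_load(f)
        missing = [
            (svc.get("name", "<unnamed>"), field)
            for svc in catalog.get("services", [])
            for field in REQUIRED_FIELDS
            if not svc.get(field)
        ]
        for name, field in missing:
            print(f"service {name!r} is missing {field!r}")
        return 1 if missing else 0

    if __name__ == "__main__":
        sys.exit(check_ownership())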

Skill Readiness

A few experts are bottlenecks. Work queues pile up around them, and domain knowledge is not shared or documented. Teams defer instead of learning. Workflows are manual and fragile.

Modern tools are available but underutilized. Confidence in CI/CD, IaC, or observability varies widely. Learning is self-directed and inconsistent. This results in uneven velocity and quality.

Core platform capabilities are used across teams. Training is routine. Teams begin developing a shared vocabulary and reuse patterns. Platform teams provide enablement, not just infrastructure.

Skill development is deliberate and organisation-wide. Architecture Modernisation Enabling Teams (AMETs) coach other teams. Engineers, PMs, and architects all speak a shared language of modernisation and can substitute for one another in delivery contexts.

Process

Select the statement that best describes your current view of your organisation's maturity.

Reuse

Each new initiative requires rebuilding from scratch. Solutions are bespoke, undocumented, and inconsistent. Architectural debt grows as duplication across systems goes unchecked.

Scattered libraries and pipelines are reused informally. Teams reimplement similar capabilities due to lack of visibility or confidence in existing artifacts. Tooling fragmentation is common.

Templates, libraries, and blueprints are centrally curated and adopted across domains. Domain architects or platform teams maintain components as shared assets. Discoverability improves through internal documentation or service catalogs.

Reuse is incentivized and tracked. Modular, versioned pipelines are maintained as products by platform teams. High adoption across domains is driven by automation and internal developer platforms (IDPs). Duplication is the exception, not the norm.
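
As a sketch of what "reuse is tracked" can look like in practice, the Python below scans checked-out repositories for references to a shared, versioned pipeline template and reports adoption per repo. The org/pipeline-templates@vN naming convention and the ./repos layout are assumptions for illustration:

    # Minimal sketch: measure adoption of shared, versioned pipeline
    # templates across locally checked-out repositories. The template
    # reference convention (org/pipeline-templates@vN) is assumed.
    import re
    from pathlib import Path

    TEMPLATE_REF = re.compile(r"org/pipeline-templates@v(\d+)")

    def template_versions(repo_root: str) -> dict[str, set[str]]:
        """Map each repo to the template versions its workflows reference."""
        base = len(Path(repo_root).parts)
        usage: dict[str, set[str]] = {}
        for wf in Path(repo_root).glob("*/.github/workflows/*.yml"):
            versions = set(TEMPLATE_REF.findall(wf.read_text()))
            if versions:
                usage.setdefault(wf.parts[base], set()).update(versions)
        return usage

    if __name__ == "__main__":
        for repo, versions in sorted(template_versions("./repos").items()):
            print(f"{repo}: v{', v'.join(sorted(versions))}")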

Governance

Security and compliance are enforced inconsistently or only during audits. Teams operate with little guidance on how to handle sensitive data or changes to production systems.

Some policies are defined, but they are stored in static documentation and rarely consulted. Teams develop informal workarounds, and violations often go undetected until post-incident reviews.

Governance models are integrated into delivery workflows. Tooling enforces approval steps, policy adherence, and access control. Security, risk, and compliance processes are visible and operationalized.

Policy-as-code and automated compliance checks are embedded in CI/CD pipelines. Real-time controls and logs are used for proactive governance. DevSecOps is practiced across all streams.
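
To make policy-as-code concrete, here is a minimal sketch of a compliance check run as a CI/CD step, assuming a simple JSON deployment manifest. Production setups typically use a policy engine such as OPA/Rego or Sentinel rather than hand-rolled Python, but the shape of the check is the same:

    # Minimal policy-as-code sketch: fail the pipeline when a deployment
    # manifest violates simple rules. The manifest layout is assumed.
    import json
    import sys

    def violations(manifest: dict) -> list[str]:
        found = []
        for svc in manifest.get("services", []):
            name = svc.get("name", "<unnamed>")
            if svc.get("public", False) and not svc.get("auth_required", True):
                found.append(f"{name}: public endpoint without authentication")
            if not svc.get("encrypted_at_rest", False):
                found.append(f"{name}: storage is not encrypted at rest")
        return found

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:
            problems = violations(json.load(f))
        for p in problems:
            print("POLICY VIOLATION:", p)
        sys.exit(1 if problems else 0)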

Feedback Loops

Teams lack visibility into how decisions affect outcomes. Retrospectives are rare or superficial. There is no structured mechanism to learn from delivery or incidents.

Retrospective reviews occur after major outages or delivery failures. Feedback is captured but not always acted upon. Learnings are informal and not reused across teams.

Process performance and delivery impact are measured. Feedback loops from retros, metrics, and experiments are used to refine product backlogs or architectural priorities.

Telemetry and change data feed directly into team backlogs, auto-scaling rules, or incident mitigation playbooks. Impact data drives continuous learning and system orchestration decisions.
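
A minimal sketch of one such loop: promote alerts that recur past a threshold into backlog items. The threshold and the tracker client in the usage note are assumptions; substitute your issue tracker's real API:

    # Minimal sketch: route recurring alerts into the team backlog.
    from collections import Counter

    RECURRENCE_THRESHOLD = 3  # alerts per week before a backlog item is filed

    def backlog_items(alerts: list[dict]) -> list[str]:
        """Return alert names that fired often enough to warrant follow-up."""
        counts = Counter(a["name"] for a in alerts)
        return [name for name, n in counts.items() if n >= RECURRENCE_THRESHOLD]

    # Usage sketch -- `tracker` is a hypothetical issue-tracker client:
    # for name in backlog_items(last_weeks_alerts):
    #     tracker.create_issue(title=f"Recurring alert: {name}",
    #                          label="reliability-debt")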

Technology

Select the statement that best describes your current view of your organisation's maturity.

Integration

Different systems operate in silos and rely on manual exports or batch processes for data sharing. There is no interoperability or standard data sharing protocol.

Ad hoc integrations have emerged using scripts or APIs, but they are brittle and poorly documented. Failures are common and monitoring is minimal.

Domain-based systems communicate using standard APIs or Kafka-like event streams. This enables real-time data flow and decoupling across components.

Event-driven architecture with domain-defined contracts enables reliable, scalable, and real-time integration. System boundaries are well defined and support asynchronous communication patterns.
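
A minimal sketch of a domain-defined contract with an explicit version. Field names, the topic convention, and the producer client are assumptions; real systems usually pair such contracts with a schema registry (Avro/Protobuf) rather than raw dataclasses:

    # Minimal sketch of a versioned, domain-owned event contract.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    TOPIC = "orders.order-placed.v2"  # version encoded in the topic name

    @dataclass(frozen=True)
    class OrderPlaced:
        """Contract owned by the Orders domain; consumers code against it."""
        schema_version: int
        order_id: str
        customer_id: str
        total_cents: int
        occurred_at: str  # ISO-8601 timestamp

    def publish(producer, event: OrderPlaced) -> None:
        # `producer` is any client exposing send(topic, value), e.g. a
        # configured kafka-python KafkaProducer; treated here as an assumption.
        producer.send(TOPIC, json.dumps(asdict(event)).encode("utf-8"))

    event = OrderPlaced(2, "o-123", "c-456", 4999,
                        datetime.now(timezone.utc).isoformat())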

Automation

Approvals, deployments, and ETL processes are triggered manually. Human error is common and audit trails are weak or missing.

Workflows may be partially scripted, but manual intervention is still required. Pipelines break often and are not resilient to failures.

Most workflows are orchestrated via automation platforms. Monitoring and alerting are built-in and teams actively reduce manual dependencies.

Fully automated systems adapt dynamically using telemetry. Auto-scaling, ML-driven triggers, and auto-remediation are standard practices across critical services.
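
A minimal auto-remediation sketch: a control loop that restarts a service when its error rate breaches a threshold. The metric and orchestration calls are injected placeholders, and the threshold and interval are illustrative:

    # Minimal sketch of telemetry-driven auto-remediation.
    import time

    ERROR_RATE_THRESHOLD = 0.05   # remediate above 5% failing requests
    CHECK_INTERVAL_SECONDS = 60

    def remediation_loop(get_error_rate, restart_service, service: str) -> None:
        # get_error_rate and restart_service stand in for your real
        # metrics and orchestration APIs (e.g. Prometheus + Kubernetes).
        while True:
            rate = get_error_rate(service)
            if rate > ERROR_RATE_THRESHOLD:
                print(f"{service}: error rate {rate:.1%}, restarting")
                restart_service(service)
            time.sleep(CHECK_INTERVAL_SECONDS)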

Observability

Failures are detected through user complaints or escalations. There is no unified view of system health or performance.

Teams have added logs and uptime dashboards. Alerts are configured for critical errors, but diagnostics are slow and fragmented.

Observability is centralized. Dashboards, metrics, logs, and traces are connected across infrastructure, application, and data components.

Observability is real-time and streaming, integrated with alerting, service-level objectives (SLOs), and root-cause analysis tooling. SLAs are monitored proactively.
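
A minimal sketch of proactive SLO monitoring via error-budget burn rate. The 99.9% target and the 14.4x paging threshold (a commonly cited fast-burn value) are assumptions, not prescriptions:

    # Minimal sketch: page when the error budget burns too fast.
    SLO_TARGET = 0.999
    ERROR_BUDGET = 1 - SLO_TARGET  # 0.1% of requests may fail
    PAGE_BURN_RATE = 14.4          # page if budget burns this fast

    def burn_rate(failed: int, total: int) -> float:
        """Budget consumption speed: 1.0 means exactly on budget."""
        if total == 0:
            return 0.0
        return (failed / total) / ERROR_BUDGET

    # Usage: 900 failures in 50,000 requests over the window
    if burn_rate(900, 50_000) >= PAGE_BURN_RATE:
        print("SLO burn-rate alert: page the on-call")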

Outcomes

Select the statement that best describes your current view of your organisation's maturity.

Business Value

Business outcomes are not measured, and teams operate without clear performance expectations or feedback loops to validate their efforts.

Teams report metrics, but there is little connection to business value creation. Reporting is periodic and often reactive.

Metrics are tied to cost savings and performance improvements. Dashboards show how investments in reuse and reliability reduce total cost of ownership.

Teams track and optimize business impact using continuous feedback. Initiatives are prioritized based on value delivery forecasts and historical ROI.
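
The statement does not prescribe a prioritisation model; one common value-forecast approach is WSJF (weighted shortest job first), sketched below with invented numbers purely for illustration:

    # Minimal WSJF prioritisation sketch: rank initiatives by
    # cost of delay divided by job size. All figures are invented.
    from dataclasses import dataclass

    @dataclass
    class Initiative:
        name: str
        cost_of_delay: float  # estimated value lost per sprint of delay
        job_size: float       # relative effort estimate

        @property
        def wsjf(self) -> float:
            return self.cost_of_delay / self.job_size

    backlog = [
        Initiative("migrate-billing", cost_of_delay=8, job_size=13),
        Initiative("self-serve-onboarding", cost_of_delay=13, job_size=5),
    ]
    for item in sorted(backlog, key=lambda i: i.wsjf, reverse=True):
        print(f"{item.name}: WSJF={item.wsjf:.2f}")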

Data Trust

Multiple reports show different numbers. Data freshness and accuracy are not guaranteed. Confidence in data-driven decisions is low.

Efforts have been made to standardize some KPIs, but inconsistencies across departments and tools remain. Trust in data is fragmented.

The organisation has adopted shared metric definitions and platforms. Lineage and provenance are tracked to ensure auditability and trust.

Data is validated, versioned, and scored for trustworthiness in real time. Dashboards reflect unified metrics layers. Governance policies are enforced automatically.
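
A minimal sketch of a per-dataset trust score combining freshness with validation coverage. The equal weights, the 24-hour freshness window, and the field names are assumptions:

    # Minimal sketch: score a dataset's trustworthiness from its
    # freshness and the share of validation checks it passes.
    from datetime import datetime, timezone

    def trust_score(last_updated: datetime,   # must be timezone-aware (UTC)
                    checks_passed: int,
                    checks_total: int,
                    max_age_hours: float = 24.0) -> float:
        """Return a 0..1 score: 1.0 means fresh and fully validated."""
        age = (datetime.now(timezone.utc) - last_updated).total_seconds() / 3600
        freshness = max(0.0, 1.0 - age / max_age_hours)
        validity = checks_passed / checks_total if checks_total else 0.0
        return 0.5 * freshness + 0.5 * validity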

Predictability

Work delivery and performance vary greatly across teams. Issues are addressed only when urgent. No proactive alerting or forecasting is available.

Teams rely on static reporting. Incidents trigger alerts, but underlying trends or risks are not flagged in advance. Forecasting is manual or non-existent.

Telemetry includes trend-based alerts and performance thresholds. Historical data is used to identify upcoming risks or degradation.

Real-time and historical data is used to simulate future outcomes. Scenario modeling helps teams plan capacity, costs, and reliability proactively.
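
A minimal scenario-modelling sketch: a Monte Carlo forecast that resamples historical weekly throughput to estimate when a backlog will be done at a chosen confidence level. The sample history and the 85th-percentile answer are illustrative:

    # Minimal Monte Carlo delivery forecast from historical throughput.
    import random

    def forecast_weeks(history: list[int], backlog: int,
                       trials: int = 10_000, percentile: float = 0.85) -> int:
        """Weeks to finish `backlog` items at the given confidence level."""
        outcomes = []
        for _ in range(trials):
            remaining, weeks = backlog, 0
            while remaining > 0:
                remaining -= random.choice(history)  # resample past weeks
                weeks += 1
            outcomes.append(weeks)
        outcomes.sort()
        return outcomes[int(percentile * trials)]

    weekly_throughput = [3, 5, 2, 6, 4, 4, 1, 5]  # items finished per week
    print(forecast_weeks(weekly_throughput, backlog=40))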

Maturity Results
View your organisation's maturity across key dimensions.