Specification: Telematics Migration into bf-manage-core with 5-Minute Freshness and Health Visibility
TLDR (Solution Summary)
- Keep a clear transition boundary: bf-telematics owns the scheduler + provider ingress adapters; bf-manage-core owns command handling, persistence, and read APIs.
- Workspace Poll Schedule: run each workspace on a staggered 5-minute cycle via the telematics service, avoiding "all vehicles time out at once" behavior.
- Canonical Telematics Snapshot: store one authoritative freshness/value record per vehicle/device in core, eliminating cross-page value mismatches.
- Request-Time Health and Issues: compute health/issues in the core query flow (no event bus/projections in this phase) and expose discoverable diagnostics.
- Stale and Missing Data List: identify devices not updated recently and configured devices with no data, giving Accounts a direct outreach/worklist view.
- Migration Cutover Plan: route current telematics reads/writes through bf-manage-core with toggle-based rollout, enabling fast rollback without destructive data changes.
1. Summary
Problem
- Sales and Accounts are seeing synchronized timeout behavior, inconsistent values between pages, and poor discoverability when configured devices stop reporting or never report.
- James currently performs manual account follow-up without a reliable stale-device signal.
Goal and Success Criteria
- Achieve predictable telematics updates at a 5-minute service level target.
- Present consistent telematics values across pages from one canonical state.
- Surface actionable health/reporting so support and Accounts can immediately identify non-reporting devices.
What Will Be Built In This Phase
- Telematics Service Ingress Layer: keep polling/websocket/MQTT protocol conversion in bf-telematics and hand off canonical update payloads to core.
- Core Command + Persistence: normalize/validate payloads and write the canonical snapshot and event history in bf-manage-core.
- Request-Time Health and Issues: evaluate freshness/status at query time from persisted core data and expose "last update received" plus stale/no-data lists.
- Migration Cutover Plan: move read ownership to core while keeping legacy polling active during shadow/canary phases.
Scope (In)
- Per-workspace poll cadence control and staggering.
- Telematics-service-to-core handoff contract for canonical updates.
- Freshness semantics and health classification in core read APIs.
- Health visibility for operations and account-facing workflows.
- Toggle-based migration of read and polling ownership.
Scope (Out)
- Replacing all provider connector internals beyond the handoff contract requirements.
- Event bus + projection read-model pipeline for this phase (deferred).
- Detailed long-term persistence optimization work.
Current Baseline (As of March 9, 2026)
- bf-manage-core currently consumes telematics data via service calls to a separate telematics service path.
- Telematics polling and provider lifecycle management live outside bf-manage-core.
- Reported behavior includes synchronized updates/timeouts and inconsistent values across UI pages.
Future Evolution Guardrails
- The model must remain compatible with both interval-based polling and provider push/stream updates.
- Event-bus/projection architecture remains a planned evolution after this transitional phase.
- Health states must remain extensible (for new failure classes and richer diagnostics) without changing user-facing semantics.
2. Users and Use Cases
Primary Personas
- Accounts liaison (James): needs a reliable list of non-reporting devices to coordinate with customer accounts.
- Fleet operations user: needs confidence that telematics values are fresh and consistent across pages.
- Support/implementation user: needs fast discoverability of configured devices that are not receiving data.
High-Level User Stories
- As Accounts, I can see which devices have not updated recently so I can proactively contact relevant accounts.
- As Operations, I can see consistent telematics values and freshness timestamps regardless of page.
- As Support, I can detect "configured but no data" devices without manual cross-checking.
Edge Cases and Failure Modes
- Workspace-wide poll failures causing many stale vehicles at once.
- Device configured recently but never receiving first data.
- Partial workspace updates where some vehicles are fresh and others are stale.
- Provider latency spikes creating delayed but eventually successful updates.
3. Conceptual Model Terms and Decisions
Key Terms
| Term | Definition | Notes |
| --- | --- | --- |
| Workspace Poll Cycle | One scheduled attempt to collect telematics updates for a workspace. | Target cadence is every 5 minutes with staggered start. |
| Telematics Handoff Contract | Canonical payload sent from the bf-telematics ingress layer into the bf-manage-core command handler. | Boundary between protocol-specific collection and core domain handling. |
| Freshness Age | Time since the last accepted telematics update for a vehicle/device. | Derived from the canonical "last update received" timestamp. |
| Stale Device | Configured device whose freshness age exceeds the stale threshold. | Default threshold is 15 minutes (configurable per workspace policy). |
| Configured-No-Data | Configured device with no accepted update since activation, beyond the grace period. | Default grace period is 60 minutes (configurable) and distinct from stale. |
| Canonical Telematics Snapshot | Single authoritative read model for current telematics values and freshness metadata. | All pages use this source to avoid inconsistencies. |
| Request-Time Health Evaluation | Health and issues calculation performed during query execution from persisted core data. | Used while the event bus/projection pipeline is deferred. |
| Telematics Health Report | Workspace-level view listing freshness state, update lag, and issue reason by device/vehicle. | Primary operational and accounts-facing artifact (computed on request in this phase). |
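The freshness semantics above can be sketched as a small classifier. This is an illustrative sketch only: the function name and signature are assumptions, while the thresholds (5-minute freshness target, 15-minute stale threshold, 60-minute grace period) come from the definitions above. The spec leaves the pre-grace state of a never-reporting device implicit; it is treated as healthy here.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Policy defaults from the Key Terms table; configurable per workspace policy.
FRESHNESS_TARGET = timedelta(minutes=5)
STALE_THRESHOLD = timedelta(minutes=15)
NO_DATA_GRACE = timedelta(minutes=60)

def classify_health(
    last_update_received: Optional[datetime],
    configured_at: datetime,
    now: datetime,
    stale_threshold: timedelta = STALE_THRESHOLD,
    no_data_grace: timedelta = NO_DATA_GRACE,
) -> str:
    """Return one of: healthy, delayed, stale, configured-no-data."""
    if last_update_received is None:
        # No accepted update since activation: only flag after the grace
        # period, per D-007 (avoid premature alerts on new configurations).
        if now - configured_at > no_data_grace:
            return "configured-no-data"
        return "healthy"  # assumption: pre-grace state is not flagged
    freshness_age = now - last_update_received
    if freshness_age <= FRESHNESS_TARGET:
        return "healthy"
    if freshness_age <= stale_threshold:
        return "delayed"
    return "stale"
```

A device thus moves healthy → delayed → stale as its freshness age grows, while configured-no-data is a distinct root cause rather than a point on that scale.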
Decision Ledger
| ID | Decision | Rationale | Alternatives Rejected | Implications |
| --- | --- | --- | --- | --- |
| D-001 | Set 5-minute freshness as the explicit service-level target. | Sales indicated reliable 5-minute consistency is high-value. | Keep mixed sub-minute targets as the primary objective. | Prioritizes predictability and trust over ultra-low latency. |
| D-002 | Stagger polling by workspace instead of synchronized starts. | Prevents simultaneous update and timeout waves. | Single global aligned polling window. | Requires a deterministic workspace offset/jitter policy. |
| D-003 | Use one canonical snapshot for all UI reads. | Resolves inconsistent values between pages. | Keep page/domain-specific derived snapshots. | Requires read paths to converge before final cutover. |
| D-004 | Promote "configured-no-data" to a first-class health state. | Removes the "wild goose chase" diagnosis pattern. | Fold into a generic stale state. | Health report and workflows must distinguish root-cause types. |
| D-005 | Keep the scheduler + protocol adapters in bf-telematics during the transitional MVP. | Minimizes migration risk while preserving existing provider integration seams. | Move all provider collection runtime into core immediately. | Explicit red-line ownership boundary with canonical handoff into core. |
| D-006 | Set the default stale threshold to 15 minutes. | Balances detection speed with provider delay tolerance against a 5-minute freshness target. | 10-minute default threshold. | Lower false-positive risk; threshold remains configurable by workspace policy. |
| D-007 | Set the default configured-no-data grace period to 60 minutes. | Gives new device configurations a practical initial data window before flagging. | Shorter immediate flagging windows. | Faster discoverability than manual checks while avoiding premature alerts. |
| D-008 | Defer the event bus + projection pipeline in this phase. | The event bus path is not delivery-ready for immediate cutover. | Block migration until the full evented architecture is complete. | Health/issues must be calculated at query time initially. |
| D-009 | Compute health/issues at request time in the core query flow. | Delivers customer-facing outcomes now without waiting for projections. | Keep stale/no-data in a separate asynchronous projection-only path. | Query service reads multiple persisted sources and applies policy on demand. |
| D-010 | Maintain toggle-based cutover/rollback per workspace. | Enables shadow, canary, and immediate fallback with low blast radius. | Big-bang migration cutover. | Requires clear ownership toggles and parity checks during migration. |
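D-002's deterministic offset/jitter policy could look like the following sketch: each workspace hashes to a stable start offset inside the 5-minute window, so cycles spread evenly and never fire in lockstep. The function name and the hashing approach are illustrative assumptions, not the agreed implementation.

```python
import hashlib
from datetime import timedelta

CYCLE = timedelta(minutes=5)

def workspace_poll_offset(workspace_id: str, cycle: timedelta = CYCLE) -> timedelta:
    """Deterministic start offset within the poll cycle for a workspace.

    Hashing the workspace id (rather than using random jitter) keeps the
    offset stable across scheduler restarts, so a given workspace always
    fires at the same point in the 5-minute window.
    """
    digest = hashlib.sha256(workspace_id.encode("utf-8")).digest()
    offset_seconds = int.from_bytes(digest[:8], "big") % int(cycle.total_seconds())
    return timedelta(seconds=offset_seconds)
```

With enough workspaces, offsets distribute roughly uniformly over the cycle, which is what prevents the synchronized update/timeout waves described in the problem statement.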
4. Domain Model and Eventstorming (Conceptual)
- Bounded contexts:
  - Telematics Ingress (bf-telematics: polling + websocket/MQTT protocol conversion)
  - Telematics Core Processing (bf-manage-core: command handling, UoW, persistence)
  - Telematics Health Query (bf-manage-core: request-time freshness/issues evaluation)
  - Account Follow-up (stale/no-data operational actions)
- Core entities:
  - Workspace, Vehicle, Device, Telematics Ingress Payload, Telematics Snapshot, Health Status
- Invariants and business rules:
  - Every accepted handoff payload maps to exactly one workspace and device identity.
  - Every configured device has exactly one current health state.
  - Freshness state is always computed from the canonical last-update timestamp.
  - All user-facing pages must read from the canonical snapshot contract.
  - A device can only be in one terminal health classification at a time (healthy, delayed, stale, configured-no-data).
Interaction Flow
```mermaid
flowchart LR
    subgraph TS["bf-telematics (left of ownership boundary)"]
        Scheduler["EventBridge Workspace Schedule"] --> PollTasks["Provider Polling Tasks"]
        WsProvider["Provider WebSocket Stream"] --> WsIngest["Ingress Adapter: Stream Message"]
        MqttBroker["MQTT Broker Topic"] --> MqttIngest["Ingress Adapter: MQTT Message"]
        PollTasks --> PollIngest["Ingress Adapter: Scheduled Poll"]
        PollIngest --> Protocol["Protocol Converter / Canonical Payload Builder"]
        WsIngest --> Protocol
        MqttIngest --> Protocol
    end
    Protocol --> Handoff["Handoff: Canonical Update Payload"]
    subgraph CORE["bf-manage-core (right of ownership boundary)"]
        Handoff --> CmdService["Service Command Handler: Normalize + Validate Update"]
        CmdService --> UoW["Unit of Work"]
        UoW --> RepoAgg["Repository + Aggregate"]
        RepoAgg --> EventStore["Event Store (event_store)"]
        RepoAgg --> Snapshot["Canonical Snapshot Storage"]
        ReadApi["Read API"] --> QueryService["Service Query Handler"]
        QueryService --> HealthCalc["Request-Time Health + Issues Calculator"]
        HealthCalc --> EventStore
        HealthCalc --> Snapshot
        HealthCalc --> DeviceRegistry["Configured Device Registry"]
        HealthCalc --> Response["Health / Issues / Snapshot Response"]
        ReadApi --> ManagePages["Manage UI Pages"]
        ReadApi --> Accounts["Accounts/Support Follow-up View"]
    end
    Deferred["Deferred in this phase: Outbox Processor + Event Bus + Projections"]
    EventStore -.-> Deferred
```
Control-Plane Flow
```mermaid
flowchart LR
    SA["System Admin Telematics Dashboard"] -->|"GET/PUT /api/telematics-mvp/workspaces/{workspace_id}/cutover"| CutoverApi["Workspace Cutover API"]
    WS["Workspace Settings Telematics Tab (feature flag: workspace-telematics-mvp-settings; always enabled in DEV)"] -->|"GET/PUT /api/telematics-mvp/workspaces/{workspace_id}/policy"| PolicyApi["Workspace Policy API"]
    CutoverApi --> Cmd["TelematicsMvpCommandService"]
    PolicyApi --> Cmd
    Cmd --> UoW["Unit of Work + Workspace Aggregate"]
    UoW --> CutoverState["Workspace cutover state: device_registry_source, poll_owner, read_owner, provider_mode"]
    UoW --> PolicyState["Workspace policy + schedule state"]
    CutoverState -.->|"controls run_workspace_cycle polling path"| PollCycle["Poll cycle behavior (legacy | both_shadow | core)"]
```
Event Timeline
```mermaid
timeline
    title Workspace Telematics Health Timeline
    T-2 Control-plane setup : System Admin sets workspace cutover flags in core
    T-1 Policy setup : Workspace Settings tab updates workspace policy (feature-flag gated outside DEV)
    T0 Scheduled trigger : EventBridge starts workspace poll cycle in telematics service
    T1 Ingress conversion : poll/websocket/MQTT updates are converted to canonical handoff payloads
    T2 Core command transaction : UoW writes aggregate changes, event store, and snapshot state
    T3 Query-time evaluation : health/issues are computed from persisted core data when requested
    T4 Follow-up : Read API serves health/issues/snapshot responses for operations and accounts
    T5 Evolution point : event bus/projection path may replace request-time calculation later
```
Event Dictionary
| Event | Description | Purpose | Key Fields | Triggers |
| --- | --- | --- | --- | --- |
| WorkspacePollCycleTriggered | Workspace poll initiated in the telematics service. | Defines the cycle boundary. | workspaceId, cycleStartedAt | Provider collection. |
| TelematicsIngressPayloadPrepared | Canonical handoff payload prepared from poll/stream/topic input. | Decouples provider protocol from the core domain. | workspaceId, providerType, deviceId, vehicleId, payloadPreparedAt, normalizedValues | Core command ingest. |
| TelematicsUpdateAccepted | Core accepted the update payload. | Refreshes canonical freshness/value truth. | workspaceId, deviceId, vehicleId, receivedAt, normalizedValues | Snapshot/event persistence. |
| WorkspacePollIngestCompleted | Poll ingest finished for a workspace. | Measures ingestion coverage. | workspaceId, cycleEndedAt, resultSummary | Cycle reliability reporting. |
| DeviceHealthEvaluatedOnRequest | Health classification computed during a query. | Drives status visibility in the MVP. | workspaceId, deviceId, freshnessAge, healthState, evaluatedAt | Returned in the API response. |
| DeviceMarkedConfiguredNoData | Configured device has no accepted data beyond the grace period. | Surfaces integration issues quickly. | workspaceId, deviceId, configuredAt, graceExceededAt | Returned in the issues response. |
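The handoff contract's canonical payload could be modelled as a simple immutable record. Field names follow TelematicsIngressPayloadPrepared in the event dictionary above; the class name and concrete types are illustrative assumptions, not the agreed contract schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Mapping

@dataclass(frozen=True)
class CanonicalUpdatePayload:
    """Canonical handoff payload from bf-telematics ingress into core.

    Frozen so ingress adapters cannot mutate a payload after it has been
    prepared; core command handling treats it as read-only input.
    """
    workspace_id: str
    provider_type: str                      # e.g. "poll", "websocket", "mqtt"
    device_id: str
    vehicle_id: str
    payload_prepared_at: datetime
    normalized_values: Mapping[str, float]  # e.g. {"soc": 0.82} (illustrative)
```

One record shape shared by poll, websocket, and MQTT ingress is what lets core validate and persist updates without knowing which protocol produced them.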
5. Requirements and Constraints
Functional Requirements
- FR-001: The system must execute a workspace poll cycle on a 5-minute target cadence for every active workspace via telematics-service scheduling.
- FR-002: The system must stagger workspace poll start times to prevent synchronized update and timeout behavior.
- FR-003: The system must maintain one canonical telematics snapshot that includes current values and "last update received" per configured device/vehicle.
- FR-004: The system must provide a Telematics Health Report for each workspace that includes device/vehicle health state and last update received timestamp.
- FR-005: The system must classify configured devices into health states including at least healthy, delayed, stale, and configured-no-data.
- FR-006: The system must provide a stale-device list suitable for Accounts workflows, including reason and recency context for follow-up.
- FR-007: The system must ensure all manage pages that present telematics freshness/value information read from the canonical snapshot/health contract.
- FR-008: The system must expose workspace-level performance indicators for telematics collection (for example cycle success, delay patterns, and recent update coverage).
- FR-009: The system must enforce boundary ownership for the transitional MVP: the telematics service handles polling/ingress conversion, and bf-manage-core handles command persistence and read APIs.
- FR-010: Until event-bus projections are available, the system must calculate health/issues at request time from persisted core data and device configuration.
Non-Functional Requirements
- NFR-001: Freshness reliability target: at least 95% of active configured devices should have an update age within 5 minutes during normal provider availability windows.
- NFR-002: Consistency target: pages reading telematics freshness/value state should converge on the same canonical values within one refresh cycle.
- NFR-003: Observability target: each workspace poll cycle must emit traceable lifecycle records across telematics service and core ingest (start, completion, outcome summary).
- NFR-004: Discoverability target: configured-no-data devices must appear in health reporting no later than one evaluation cycle after grace-period breach.
- NFR-005: Request-time query performance for health/issues must remain within acceptable UI latency budgets for first-wave workspace sizes.
Constraints and Assumptions
- EventBridge is the intended scheduler/orchestration mechanism for telematics-service poll cycles and reconciliation sweeps.
- Providers may be integrated as poll, websocket stream, or MQTT topic ingress in telematics service while sharing one canonical handoff contract into core.
- Event bus, outbox processor fan-out, and projection read models are intentionally deferred in this phase.
- Providers may return partial or delayed data; health logic must distinguish delayed vs no-data conditions.
- Workspace-specific policy values (stale threshold, no-data grace period) are configurable with defaults of 15 minutes and 60 minutes.
- Migration must preserve business continuity while moving ownership into bf-manage-core.
Build Item Coverage Mapping
| Build Item | Requirement Coverage |
| --- | --- |
| Telematics Service Ingress + Polling | FR-001, FR-002, FR-009, NFR-001 |
| Core Command + Canonical Snapshot Persistence | FR-003, FR-007, FR-009, NFR-002 |
| Request-Time Health and Issues Calculator | FR-004, FR-005, FR-006, FR-008, FR-010, NFR-004, NFR-005 |
| Migration Cutover Plan | FR-009, FR-010, NFR-001, NFR-003 |
Verification Notes
- FR-001/FR-002/NFR-001: verify schedule execution distribution and freshness-age percentile reporting by workspace.
- FR-003/FR-007/NFR-002: verify cross-page value/freshness parity against the canonical snapshot.
- FR-004/FR-005/FR-006/NFR-004: verify request-time health-state transitions and stale/no-data surfacing behavior.
- FR-008/NFR-003: verify performance indicators and end-to-end poll-cycle trace completeness across telematics service and core.
- FR-009/FR-010/NFR-005: verify boundary responsibilities and request-time query latency under first-wave workspace load.
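For the NFR-001 check, freshness coverage can be sketched as the fraction of active configured devices whose update age is within the 5-minute target; the function name and the decision to count never-reporting devices against coverage are assumptions for illustration.

```python
from datetime import datetime, timedelta
from typing import Iterable, Optional

TARGET_AGE = timedelta(minutes=5)  # NFR-001 freshness target

def freshness_coverage(
    last_updates: Iterable[Optional[datetime]], now: datetime
) -> float:
    """Fraction of devices with an update age within the freshness target.

    Devices with no accepted update (None) count against coverage, since
    they clearly fail the 5-minute target. NFR-001 asks for >= 0.95 during
    normal provider availability windows.
    """
    ages = list(last_updates)
    if not ages:
        return 1.0  # vacuously covered when no devices are configured
    fresh = sum(1 for ts in ages if ts is not None and now - ts <= TARGET_AGE)
    return fresh / len(ages)
```

Reporting this value per workspace alongside freshness-age percentiles would give the verification evidence called for in the notes above.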
6. Interaction and Flow
- Journey overview:
- EventBridge triggers a per-workspace cycle in telematics service.
- Telematics service ingests poll/websocket/MQTT updates and converts them to canonical handoff payloads.
- Core command flow validates and persists canonical snapshot/event history.
- Read API requests compute freshness/health/issues at request time from persisted core data and device configuration.
- Manage pages and Accounts workflows consume one core contract for telematics values and health.
Sequence Diagram
```mermaid
sequenceDiagram
    participant SA as System Admin Telematics Dashboard
    participant WS as Workspace Settings Telematics Tab
    box Telematics Service (left of ownership boundary)
        participant EB as EventBridge Scheduler
        participant TS as Poll Scheduler + Polling Tasks
        participant IA as Ingress Adapters (Poll/WebSocket/MQTT)
        participant HO as Core Handoff Client
    end
    participant RB as Ownership Boundary (red line)
    box bf-manage-core (right of ownership boundary)
        participant CMD as Command Handler
        participant UOW as Unit of Work
        participant RA as Repository + Aggregate
        participant ES as Event Store
        participant SNAP as Canonical Snapshot Store
        participant API as Read API
        participant QS as Query Service
        participant CALC as Health/Issues Calculator (request-time)
        participant REG as Configured Device Registry
    end
    participant UI as Manage UI + Accounts/Support
    SA->>API: GET/PUT workspace cutover flags
    API->>CMD: Persist workspace cutover settings
    CMD->>UOW: Save cutover to workspace aggregate
    WS->>API: GET/PUT workspace telematics policy
    API->>CMD: Persist workspace policy settings
    CMD->>UOW: Save policy + update schedule
    EB->>TS: Trigger workspace poll cycle
    TS->>IA: Run provider poll / receive stream or MQTT message
    IA-->>TS: Canonical update payload(s)
    TS->>HO: Prepare handoff batch
    HO->>RB: Handoff canonical payload batch
    RB->>CMD: Submit payload batch
    CMD->>UOW: Begin transaction
    UOW->>RA: Load + mutate aggregate
    RA-->>UOW: Domain events
    UOW->>ES: Append event history
    UOW->>SNAP: Upsert canonical snapshot
    UOW-->>CMD: Commit
    Note over CMD,SNAP: Event bus/outbox projections are deferred in this MVP phase.
    UI->>API: Request health / issues / snapshots
    API->>QS: Execute query use-case
    QS->>CALC: Evaluate freshness and issue states
    CALC->>ES: Read ingest/event history
    CALC->>SNAP: Read latest snapshot values
    CALC->>REG: Read configured devices
    CALC-->>QS: Health/issues/snapshot DTOs
    QS-->>API: Response DTO
    API-->>UI: API response
    Note over API,UI: Health and issues are calculated at request time.
    Note over CMD: workspace poll_owner flag governs legacy/shadow/core polling behavior for run_workspace_cycle.
```
7. Non-Technical Implementation Approach
- Approach overview:
- Define and align canonical freshness/health semantics first.
- Keep polling/protocol conversion in telematics service and formalize a canonical handoff contract into core.
- Run migration in controlled phases with parallel validation of consistency, freshness, and query-time health outcomes.
- Cut over UI and account workflows to canonical report/snapshot outputs.
- Delivery sequencing:
- Phase 1: Define health taxonomy, policy defaults, and telematics-service-to-core handoff contract.
- Phase 2: Implement telematics-service polling/ingress conversion and core ingest command persistence.
- Phase 3: Expose request-time health/issues/snapshot queries for Accounts and Support.
- Phase 4: Switch all telematics consumers to canonical core contract and retire direct legacy read path.
- Phase 5 (post-MVP): Introduce event bus/projection pipeline to replace request-time calculation where beneficial.
- Cutover control model (per workspace):
  - device_registry_source: legacy_sync | core
  - poll_owner: legacy | both_shadow | core
  - read_owner: legacy | core
  - provider_mode: legacy_provider | depot_sim | mixed
- UI rollout controls:
  - The Workspace Settings Telematics tab is gated by the feature flag workspace-telematics-mvp-settings.
  - In local development (import.meta.env.DEV), that tab is enabled regardless of flag state for implementation/testing.
  - Migration cutover flags are managed in the System Admin Telematics dashboard; Workspace Settings is reserved for public-facing policy controls.
- Rollback model:
- Immediate rollback is done by setting read_owner=legacy and poll_owner=legacy.
- Legacy service remains running during migration/soak; core data retained for diagnostics.
- Dependencies and prerequisites:
- Product/Sales sign-off on stale and no-data policy thresholds.
- Accounts workflow agreement for outreach handling and ownership.
- Operational dashboards for cycle reliability, freshness indicators, and query-latency monitoring.
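The per-workspace cutover state and the rollback rule described above can be sketched as a small value object. The flag names and allowed values mirror the cutover control model in this section; the class and helper names are illustrative assumptions, not the actual workspace aggregate API.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class WorkspaceCutover:
    """Per-workspace ownership toggles; defaults match the pre-migration state."""
    device_registry_source: str = "legacy_sync"  # legacy_sync | core
    poll_owner: str = "legacy"                   # legacy | both_shadow | core
    read_owner: str = "legacy"                   # legacy | core
    provider_mode: str = "legacy_provider"       # legacy_provider | depot_sim | mixed

def roll_back(state: WorkspaceCutover) -> WorkspaceCutover:
    """Immediate rollback: point reads and polling back at the legacy service.

    Other flags are left untouched so core-side data stays available for
    diagnostics, matching the non-destructive rollback model above.
    """
    return replace(state, read_owner="legacy", poll_owner="legacy")
```

Because rollback only flips two flags and deletes nothing, it can be applied per workspace during canary without affecting data already persisted in core.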
8. Open Questions
- OQ-003: Should stale/no-data alerts be passive report-only in phase 1, or include active notifications?
- OQ-004: Which exact manage pages are in first-wave scope for canonical snapshot cutover?
- OQ-005: What query-latency threshold should trigger moving health/issues from request-time calculation to projected read models?
- OQ-006: What criteria mark readiness to introduce event bus + projection infrastructure post-MVP?
9. Appendices
- Source feedback incorporated:
- Vehicles update and timeout in sync today.
- Reliable updates every 5 minutes are considered high-value.
- Values are inconsistent between pages.
- Health reporting and "last update received" visibility are needed.
- Accounts workflow needs a direct list of devices not updated recently.
- Configured devices with no incoming data must be discoverable quickly.
9.1 Current-State Risk Summary (Aligned to Slides)
- UI currently consumes telematics through two pipelines (direct telematics service and via core), which increases cross-page inconsistency risk.
- Shared legacy polling responsibilities create correlated failure patterns (update/timeout waves) and larger blast radius per cycle fault.
- Split ownership across services fragments observability and makes canary/rollback controls harder without explicit cutover toggles.
- IPT-71: no-data alert does not identify impacted vehicles.
- IPT-66: polling/refresh rate makes data feel not live.
- IPT-58: stale/unknown SoC handling is inconsistent.
- IPT-50: unknown SoC investigation for customer fleet.
- IPT-46: SoC mismatch across Chargers/Power/live views.
9.3 Design Review and Spec Walkthrough Checklist
- Scope to review:
- Bounded context responsibilities in the core telematics module.
- Event flow from scheduler trigger to canonical snapshot and health evaluation.
- Cutover toggles, rollout phases, and rollback path.
- Questions to resolve:
- Confirm where legacy provider x_client.py and x_polling_task.py patterns are reused vs replaced.
- How the request-time health/issues path transitions to event bus/projections post-MVP.
- What deviations require immediate spec updates before implementation continues.
- Expected outputs:
- Approved architecture direction and MVP scope boundaries.
- Explicit list of spec deltas with owners.
- Next-step worklist for migration stream execution.
10. MVP Coverage and DDD Alignment
MVP Satisfies (Transitional Design)
Telematics Service Ingress + Polling:
- Satisfies FR-001, FR-002, FR-009 at MVP level by running per-workspace schedule entries and protocol-specific ingress in bf-telematics.
Core Command + Canonical Snapshot:
- Satisfies FR-003, FR-007, FR-009 at MVP level by accepting canonical handoff payloads and persisting one workspace/device snapshot contract in core.
Request-Time Health Report + Issues:
- Satisfies FR-004, FR-005, FR-006, FR-008, FR-010 at MVP level by calculating freshness/health/issues on query from persisted core data and device configuration.
Cutover Controls and Rollback:
- Satisfies FR-009, FR-010, NFR-001, NFR-003 through workspace-level ownership toggles and reversible rollout phases.
How the MVP Delivers It
- EventBridge triggers per-workspace cycles in telematics service.
- Provider polling/stream/topic inputs are converted into one canonical handoff payload.
- Core command flow validates payloads and writes event history + canonical snapshot through UoW/repository boundaries.
- Core query flow calculates health/issues at request time from event store, snapshot state, and configured device metadata.
- Event bus/outbox projection fan-out is deferred to a post-MVP phase.
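The T2 core transaction described above (append event history and upsert the canonical snapshot together) can be sketched with an in-memory stand-in. All names here are illustrative assumptions, not the real TelematicsMvpCommandService or unit-of-work API.

```python
from typing import Any, Dict, List

class InMemoryUnitOfWork:
    """Toy unit of work: stages event-history appends and snapshot upserts,
    then commits both together, mirroring the single core transaction."""

    def __init__(self) -> None:
        self.event_store: List[Dict[str, Any]] = []
        self.snapshots: Dict[str, Dict[str, Any]] = {}
        self._staged_events: List[Dict[str, Any]] = []
        self._staged_snapshots: Dict[str, Dict[str, Any]] = {}

    def ingest(self, payload: Dict[str, Any]) -> None:
        # Normalize + validate (sketched): reject payloads missing identity,
        # per the invariant that every payload maps to one workspace/device.
        for key in ("workspace_id", "device_id", "received_at"):
            if key not in payload:
                raise ValueError(f"payload missing {key}")
        self._staged_events.append({"type": "TelematicsUpdateAccepted", **payload})
        self._staged_snapshots[payload["device_id"]] = {
            "last_update_received": payload["received_at"],
            "values": payload.get("normalized_values", {}),
        }

    def commit(self) -> None:
        # Event history and canonical snapshot land in one commit, so read
        # APIs never observe one without the other.
        self.event_store.extend(self._staged_events)
        self.snapshots.update(self._staged_snapshots)
        self._staged_events.clear()
        self._staged_snapshots.clear()
```

Query-time health evaluation then reads `snapshots` (and, for history, `event_store`) rather than any projection, which is exactly the deferred-projection posture of this phase.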
Brief DDD Alignment Note
- Aligned:
- A clear bounded-context boundary exists between protocol ingress (bf-telematics) and domain command/query handling (bf-manage-core).
- Command/query concerns are split (TelematicsMvpCommandService, TelematicsMvpQueryService).
- Command writes flow through an explicit workspace unit-of-work + repository boundary.
- Query-time policy evaluation is centralized in core query use cases rather than spread across UI pages.
- Partial (next increments):
- No projection read-model/event-bus pipeline in this phase; health/issues are computed on demand.
- Query-time calculation may require optimization thresholds before broader workspace rollout.
- The command/query layer is class-based in this module; docs/DDD/agent.md prefers function-style use cases, so this can be flattened later without behavior change.
Provider Polling Pattern Reuse Decision
- Reused from legacy service:
- Keep the provider integration seam as x_client.py + x_polling_task.py in the telematics service so provider API concerns stay isolated.
- Transitional boundary decision:
- Telematics service remains the primary polling/protocol-conversion owner; core primarily receives canonical payloads.
- Core runtime supports optional compatibility polling paths gated by workspace poll_owner and provider task availability, but the default rollout keeps poll_owner=legacy.
- Evolution note:
- When event bus/projections are introduced, provider seams remain unchanged; only downstream core processing shifts.