CCT Labs: A Reference Lab for Programmable Physics¶
We're a small, rigorous R&D lab building coherent field control infrastructure, with Space as our long-term focus. AI and Biology provide calibration and validation domains.
Coherent field control infrastructure = phase-coherent actuation + synchronized sensing + low-latency feedback + calibration protocols + an energy ledger—implemented with explicit actuator limits (delay and bandwidth/low-pass response) and explicit measurement limits (finite-shot noise and defined averaging)—packaged as a repeatable methodology and reference devices.
Our theoretical foundation is the Continuum Computation Thesis (CCT)—a framework for understanding how physical systems trade bandwidth, coherence, and energy. We develop the theory, run the simulations, and build the hardware that turns coherent control from conjecture into engineering capability.
Near-term, CCT Labs functions first as a reference lab: the immediate output is validated methodology, reference devices, and reproducible benchmarks. Application narratives remain downstream of that validation.
What we're building in 12 months:
- A photonic reference bench (RFH-QF): reproduce ≥3 discrete response bands (repeatable plateaus in the declared coherence functional as a control parameter is swept) with band edges within ±10% of simulation under declared tolerances, plus a hybrid MZI displaced-counting→homodyne sweep to calibrate the observer slider and discreteness metrics.
- An RF/EM field-control bench (RFH-PL): closed-loop stabilization of test masses in field-shaped potential wells with RFH α in [0.9, 1.1], under a declared estimator/noise model and under declared actuation limits (delay and bandwidth/low-pass response) consistent with the controller, using waveform-shaped control primitives rather than idealized step inputs. Our default primitive is a two-step pre-emphasis + hold waveform with tunable pre-duration (sketched just after this list); our default discipline is explicit shot budgets (e.g., N=4 vs N=16) and a gated calibration policy (a cheap 1-point calibration unless the regime looks pathological, then 2-point).
- A first hardware Prog_T ledger: a full subsystem energy ledger, benchmark protocol, and baseline comparisons (Lens Tools v1), including latency/bandwidth constraints and finite-shot measurement variance with explicit averaging budgets, cross-condition checks, and calibration triggers.
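As a concrete illustration of the default control primitive above, here is a minimal sketch of the two-step pre-emphasis + hold waveform; the function name and parameter values are illustrative, not bench settings:

```python
import numpy as np

def pre_emphasis_hold(t, u_hold, k_pre=2.0, t_pre=0.05):
    """Two-step pre-emphasis + hold waveform (illustrative defaults).

    For t < t_pre the drive is boosted to k_pre * u_hold to punch through
    actuator delay and low-pass rolloff; afterwards it settles to u_hold.
    """
    return np.where(t < t_pre, k_pre * u_hold, u_hold)

# Example: a 1 s trajectory at 1 kHz with a tunable pre-duration.
t = np.linspace(0.0, 1.0, 1000)
u = pre_emphasis_hold(t, u_hold=1.0, k_pre=2.5, t_pre=0.08)
```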
The Challenge¶
Spaceflight remains expensive because payload fraction falls exponentially with Δv (rocket equation), making missions propellant-limited and hard to reuse. To change this, we need field-based external infrastructure that can reduce reliance on onboard propellant and structural mass.
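For concreteness, the Tsiolkovsky relation makes the penalty explicit:

\[
\Delta v = v_e \ln\frac{m_0}{m_f}
\quad\Longrightarrow\quad
\frac{m_f}{m_0} = e^{-\Delta v / v_e},
\]

so the final mass fraction (structure + payload) decays exponentially in mission \(\Delta v\) relative to exhaust velocity \(v_e\): a mission with \(\Delta v = 3v_e\) delivers only \(e^{-3}\approx 5\%\) of its launch mass.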
AI faces a similar physics bottleneck: frontier training is energy- and capex-intensive. Both domains hit the same wall: brute-force control is expensive.
The common issue: we are fighting physics instead of surfing it—spending energy to overpower dynamics instead of steering their native coherence.
Where we fit¶
CCT Labs turns a feedback-based physics framework into a reproducible engineering methodology. We function first as a reference lab for programmable physical media—substrates whose native dynamics can be steered, measured, and benchmarked under declared constraints.
We've developed two operational quantities we use to score substrates:
- RFH (α): Exponent linking estimation error to measurement bandwidth (bandwidth defined as information throughput, e.g., FI/sec or a monotone proxy) for a declared estimator, noise model, and bandwidth definition. Coherent regimes approach α≈1; the 'Theorem 8 Floor' of incoherent averaging yields α≈0.5. This value represents the limit of uncoordinated measurement, where resolution is capped by back-action-limited noise. CCT Labs uses this as the baseline benchmark: any system realizing α≈0.5 is treated as a passive, 'noisy' substrate. Success in Phases 1–3 requires navigating the system out of this Theorem 8 regime and into a coherent-integration regime where α interpolates toward 1.0, signifying that structured driving has successfully suppressed back-action costs. This log-log scaling is called the Bandwidth-Quantization Law (BQL). RFH treats instruments and controllers as finite-bandwidth "compilers" whose limits shape observed discreteness (see the fit sketch below).
- Prog_T: Intentional steering (from control inputs to task outcome) per joule, under a full energy ledger, closed-loop constraints, and a defined time horizon. Coherent regimes typically allow higher Prog_T than incoherent regimes.
Baselines for Prog_T comparisons:
- Formation Control: brute-force = direct mechanical actuation (e.g., voice-coil / stage control) or passive drift matched to the same stability target.
- VO₂ Phase Switching: thermal = resistive heating to trigger the insulator-metal transition at equilibrium.
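To make the α convention concrete, here is a minimal sketch of an RFH exponent fit, assuming a declared bandwidth grid \(B\) and a per-regime error metric; the estimators actually published in Lens Tools v1 may differ:

```python
import numpy as np

def fit_rfh_alpha(bandwidths, errors):
    """Fit alpha via log-log regression of estimation error vs bandwidth.

    Model: err(B) ~ B**(-alpha). alpha ≈ 0.5 is the incoherent
    "Theorem 8 floor"; alpha → 1 indicates coherent integration.
    """
    slope, _ = np.polyfit(np.log(bandwidths), np.log(errors), 1)
    return -slope  # err ~ B^(-alpha)  =>  slope = -alpha

# Synthetic check: a coherent substrate (alpha = 1) with mild log-noise.
rng = np.random.default_rng(0)
B = np.logspace(0, 3, 12)
err = B**-1.0 * np.exp(rng.normal(0.0, 0.05, B.size))
print(f"alpha ≈ {fit_rfh_alpha(B, err):.2f}")  # ~1.0
```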
We will publish the exact coherence functional and estimators in Lens Tools v1.
The Engineering Bet¶
We're building coherent-field-control methodology for high-programmability regimes. We run a strict Validation Loop: pre-registered predictions → simulation → hardware replication (declared tolerances + stop conditions; publish negatives).
Simulation is treated as a control-and-measurement dress rehearsal: candidate controllers must remain effective under declared delay/bandwidth limits and finite-shot noise, and must carry across conditions before they become hardware targets.
Practically, this means we design and pre-register controller families (waveform shape + timing), estimator regimes (shot budgets and holdout/generalization checks), and calibration logic (when to do 1-point vs 2-point) in simulation before we build hardware around them.
These default controller and estimator choices are motivated by the internal toy-world controllability addendum in Appendix C §11.12.
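A minimal version of this dress rehearsal, under toy assumptions (first-order plant, pure transport delay, low-pass actuator, shot-noise-limited readout; all names and parameters are illustrative, not pre-registered values):

```python
import numpy as np

def run_episode(kp=0.6, ki=0.05, delay_steps=3, tau_act=5.0, n_shots=4,
                sigma=0.2, steps=2000, setpoint=1.0, seed=0):
    """Steady-state output spread for one controller / shot-budget choice."""
    rng = np.random.default_rng(seed)
    x, u_act, integ = 0.0, 0.0, 0.0
    u_queue = [0.0] * delay_steps                    # pure transport delay
    xs = []
    for _ in range(steps):
        y = x + rng.normal(0.0, sigma / np.sqrt(n_shots))  # finite-shot readout
        e = setpoint - y
        integ += e
        u_queue.append(kp * e + ki * integ)          # declared PI command
        u_act += (u_queue.pop(0) - u_act) / tau_act  # low-pass actuator response
        x += 0.1 * (-x + u_act)                      # toy first-order plant
        xs.append(x)
    return float(np.std(xs[steps // 2:]))

# Compare declared shot budgets before committing to a hardware target.
for n in (4, 16):
    print(f"N={n:>2}: steady-state spread ≈ {run_episode(n_shots=n):.4f}")
```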
Phase 1-2 results are valuable on their own; later phases are explicitly gated by replication.
One stack, two domains: We're building one measurement-and-control stack that validates both AI and Space applications. Year 1 is de-risking; if it validates, Space is the long game.
The Insight Chain¶
CCT is built on a small number of testable claims. Each builds on the last:
| # | Claim | What It Means | Test |
|---|---|---|---|
| 1 | Observed discreteness can emerge from finite bandwidth | Measurement limits can induce apparent quantization | RFH shows regime-local scaling/bands + observer-slider transition (counting↔phase-sensitive) under pre-registered confounder controls. |
| 2 | Coherence/estimation scaling transfers across domains | The same principles apply from photonics to biology when the constraint class is declared (actuation limits and measurement limits) | RFH + estimator behavior across ≥3 domains under matched constraint classes |
| 3 | Coherence can be increased via structured driving | Systems can be driven from incoherent to coherent regimes | Regime switching replicated in hardware |
| 4 | Coherent driving improves task control per joule | Maximum effect per joule comes from coherent driving | Prog_T(coherent) > Prog_T(thermal) under full energy ledger |
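As a toy illustration of Claim 1 (deliberately simpler than the RFH bench protocol), a finite record length alone can quantize a continuous estimate: FFT-based frequency estimation over a window of length T snaps to the 1/T bin grid, turning a smooth sweep into plateau-like bands:

```python
import numpy as np

# Finite record length T limits frequency resolution to 1/T, so a smoothly
# swept "true" frequency yields estimates snapped to the k/T grid:
# apparent discreteness induced purely by the measurement budget.
fs, T = 1000.0, 0.5                        # sample rate (Hz), record length (s)
t = np.arange(0.0, T, 1.0 / fs)
true_freqs = np.linspace(40.0, 60.0, 200)  # continuous parameter sweep
bins = np.fft.rfftfreq(t.size, 1.0 / fs)
est = [bins[np.argmax(np.abs(np.fft.rfft(np.sin(2 * np.pi * f * t))))]
       for f in true_freqs]
# `est` is a staircase with 1/T = 2 Hz steps: discrete bands from a
# continuous parameter, which dissolve as the record length T grows.
```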
Claims 1-4 are testable in Year 1. We test progressively deeper regimes; later phases are gated by earlier replication.
Exploratory (gated by Phases 1-3): Extreme coherence may perturb effective propagation metrics. Test: blinded, pre-registered ToF/phase residual search after Phases 1-3.
CCT is presented across three epistemic layers (model theorems → engineering regime → ontology); this grant funds Layer 2 (engineering) validation.
Read: cct-philosophical.md (ontology) and cct-scientific.md (scientific).
So far¶
- We've fit RFH exponents across heterogeneous data (LIGO, cameras, radar, ECG, pulsars, bioelectric regeneration, plus newer pilots in paleomagnetic excursions, economic time series, and a small-source quantum-optics sweep; the pulsar fits are exploratory, with estimator details to be published in Lens Tools) as exploratory calibration/workflow checks to map regimes and estimator behavior. For bioelectric systems, a 9-level synthetic gap-junction sweep yields α = 0.35 ± 0.02 (sub-incoherent regime). Recent pilots add concrete portability checks: an economic aggregation-distortion pipeline yields \(\alpha_{\text{RV}} \approx 0.52\) stably across BTC/ETH (with some tail-based distortions falling below-band), paleomagnetic excursions show positive scaling, and a small-source quantum-optics dataset (DS3) yields mid-band \(\alpha \sim 0.47\text{–}0.53\) under a proxy \(B\) definition with fixed RBW. These pilots primarily validate portability and falsifier hygiene (declared \(B, \Delta\), uncertainty handling, and sensitivity checks).
- We've built simulations of analog "horizon" devices that show discrete stable response bands and a high-gain regime, reaching ≈4.9× response gain at ~88% coherence, where response gain and programmability per joule peak.
- We've validated mode-selective coherent control in lattice simulations ("Cold Melt"), demonstrating regime switching: baseline lattice dynamics (thermal, incoherent) → resonant coherent driving → mode-selective coherent response. This shows ~3× Prog_T advantage (constant-factor gain; scaling class unchanged) and validates the core claim that coherence is programmable via field structure, not a fixed material property.
- We've run pre-registered simulation campaigns (pre-declared hypotheses and stop conditions) that predicted how these devices respond to parameter changes.
The next step is to turn this into validated lab hardware and a general methodology that others can use.
Impact Targets¶
Space — Coherent Field Control at Increasing Depth¶
We pursue space systems via programmable field coherence (measurement + feedback + actuation) that shifts capability from onboard propellant to external infrastructure (beams/fields, timing, sensing, control). The objective is not new fundamental physics; any apparent "gain" is treated as power routing / field focusing relative to baseline configurations, not energy creation.
The Core Bet: Coherent fields give you more control per joule than incoherent energy. We validate this at increasing depth:
| Phase | Level | Experiment | Success Metric |
|---|---|---|---|
| 1 | Field | RF/EM Field-Control Bench | RFH α in [0.9, 1.1], stable field geometry under closed-loop phase control (report knees/bands if present). |
| 2 | Matter | VO₂ Insulator-Metal Transition | Prog_T(coherent) > Prog_T(thermal) |
| 3 | Quantum | YBCO Superconductor Tc Tuning | Prog_T ratio > 1.5× (stretch goal) |
| 4 | Metric | ToF/Phase Anomaly Detection | Reproducible anomaly > 1σ (Year 2+) |
We've already de-risked candidate operating points in simulation. Simulation sweeps identified an optimal operating point (normalized frequency ≈ 0.32, relative to the device's characteristic resonance/cavity scale as defined in preregistration), producing ≈4.9× signal gain at ≈88% coherence, with a high-fidelity backup at ≈99% coherence. Success means reproducing this predicted gain and coherence in hardware.
Phase 1: Formation Control. We use electromagnetic (RF/EM) standing-wave fields and closed-loop phase control to create field-shaped potential wells that stabilize test masses on a low-friction stage (air-bearing or pendulum). We quantify performance with RFH α (target: [0.9, 1.1]) and a Prog_T energy ledger. EM field shaping is the natural bridge from tabletop validation to macroscopic actuation, because the same synchronization + feedback primitives scale to phased-array and distributed-field architectures. This bench is not only a stabilization demo; it is a pre-declared controller-selection test. CCT succeeds here only if RFH and Prog_T would have selected a better controller, sensing budget, or actuation scheme than the baseline design process under matched resource constraints.
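To fix intuition for the Phase 1 mechanism, here is a toy closed-loop sketch: a damped test mass sits in a standing-wave potential whose phase is steered by position feedback. The dynamics and parameters are illustrative, not the bench design:

```python
import numpy as np

# Toy closed loop: damped test mass in a standing-wave potential
# U(x) = -U0 * cos(k*x - phi). Feedback steers the field phase phi so the
# potential well (minimum at x = phi/k) tracks a target position.
U0, k, m, gamma, dt = 1.0, 2 * np.pi, 1.0, 0.5, 1e-3
x, v, phi = 0.10, 0.0, 0.0
x_target, kp = 0.25, 4.0
rng = np.random.default_rng(1)
for _ in range(20000):
    y = x + rng.normal(0.0, 1e-3)          # noisy position readout
    phi += kp * (x_target - y) * dt        # phase feedback moves the well
    force = -U0 * k * np.sin(k * x - phi)  # F = -dU/dx
    v += (force - gamma * v) / m * dt      # damped (air-bearing-like) motion
    x += v * dt
print(f"final x ≈ {x:.3f} (target {x_target})")
```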
Phase 2: Material Control. We extend the same methodology to drive a phase transition (VO₂ insulator-metal) with less energy than thermal equilibrium. This validates CCT at the material level—coherent fields are more efficient than heat.
Phase 3 (Stretch): Quantum Materials. If Phase 2 succeeds, we attempt the same on a superconductor (YBCO), probing whether coherent control extends to quantum phase transitions.
Phase 4 (Year 2+): Metric Exploration. If Phases 1-3 validate CCT methodology, we probe whether extreme coherent control produces detectable ToF (time-of-flight) or phase anomalies—deviations from baseline propagation predictions.
Each phase builds on the last. We don't claim Phase N+1 until Phase N is validated.
Why This Works: Field Geometry as Structure
We've identified specific field configurations that provide structural control at lower energy than mechanical alternatives. The physics is standard (coherent interference, standing waves); the engineering insight lies in knowing which configurations work and how to stabilize them. Field geometry replaces structural mass.
This isn't exotic physics. Optical tweezers trap particles the same way; we're scaling it to macroscopic formation control and asking whether the same principle extends to effective propagation (Phase 4).
RFH and Prog_T as Engineering Tools
These metrics aren't just validation criteria—they're engineering tools for scale-up:
| Metric | What It Tells You |
|---|---|
| RFH α | "Am I still in the coherent regime?" — if α drops toward 0.5, you're losing coherence |
| Prog_T | "How much control do I get per joule?" — your energy budget for a given mission outcome |
As you scale from lab bench to formation control to larger distances, RFH and Prog_T track whether the physics still holds. They're the gauges that tell you "this will work at scale" or "this caps out here."
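A minimal sketch of how a Prog_T ledger comparison could look in code; the field names are illustrative placeholders, not the Lens Tools v1 API:

```python
from dataclasses import dataclass

@dataclass
class EnergyLedger:
    """Subsystem energy ledger over a defined time horizon (joules)."""
    actuation_J: float
    measurement_J: float
    io_J: float

    @property
    def total_J(self) -> float:
        return self.actuation_J + self.measurement_J + self.io_J

def prog_t(task_outcome: float, ledger: EnergyLedger) -> float:
    """Declared task-outcome score achieved per joule spent."""
    return task_outcome / ledger.total_J

# Same stability target hit by coherent driving vs a thermal baseline.
coherent = prog_t(0.95, EnergyLedger(actuation_J=2.0, measurement_J=0.5, io_J=0.1))
thermal = prog_t(0.95, EnergyLedger(actuation_J=7.0, measurement_J=0.2, io_J=0.1))
print(f"Prog_T ratio (coherent / thermal) ≈ {coherent / thermal:.2f}")
```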
For the photonic observer-slider bench, success is treated differently: it is a purpose-built measurement-regime test, and it succeeds only if the record type shifts reproducibly as the observer mode is swept under fixed source conditions and declared confounder controls.
AI — Calibration Domain¶
Coherent control applies to analog computing (thermodynamic co-processors), where physical relaxation dynamics buy compute. The same RFH/Prog_T metrics that score space substrates also score AI substrates—making AI a calibration domain for our methodology.
Year-1 scope (AI): Validate whether a candidate analog substrate supports (a) reproducible RFH scaling, and (b) measurable Prog_T under full energy accounting. If validated, the path is licensing reference devices and methodology to hardware partners, not vertical integration.
Biology — Partner-Led Calibration¶
We provide RFH/Prog_T analysis tools to partner labs studying bioelectric regeneration (planaria, Xenopus, organoids). This is not a core deliverable—math and tools only.
A Note on "Coherence"¶
Throughout, "coherence" means repeatable, phase-consistent response—the same inputs produce statistically consistent outputs across repeated trials. The exact coherence functional will be finalized and published as part of the CCT scientific methodology.
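As one possible trial-level proxy (explicitly not the final published functional), phase consistency across repeated trials can be scored as a mean resultant length:

```python
import numpy as np

def phase_coherence(trials: np.ndarray, f_probe: float, fs: float) -> float:
    """Mean resultant length of per-trial response phase at f_probe.

    trials: shape (n_trials, n_samples), repeated responses to the same
    input. Returns a value in [0, 1]; 1 = phase-identical trials.
    """
    t = np.arange(trials.shape[1]) / fs
    ref = np.exp(-2j * np.pi * f_probe * t)  # single-frequency probe
    phases = np.angle(trials @ ref)          # per-trial phase at f_probe
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Identical trials score ≈ 1; trials with randomized phase score ≈ 0.
```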
Known Gaps This Grant Resolves (De-risking Deliverables)¶
This is a de-risking program. Four items must be resolved before scaling claims:
- Coherence functional: replace simulation proxy language with a published operational definition and a hardware-measurable equivalent.
- Prog_T realism: report Prog_T with a subsystem energy ledger (actuation / measurement / I/O) and task-relevant latency/stability constraints.
- AI wedge: choose a first benchmark class and define the readout/scoring protocol and baselines in advance (pre-registered), enabling apples-to-apples comparisons.
- RFH standardization: a default bandwidth definition (\(B :=\) information rate) and pre-registered discreteness metrics stable across measurement modes.
12-month Track (seeking a $250k grant)¶
This program buys one thing: bench replication of pre-registered predictions. We already have candidate geometries and simulation campaigns; funding converts them into measured hardware with published protocols, tolerance bands, and an energy ledger—turning CCT from "promising" into "reproducible."
The grantee will have an option to invest in the first priced equity round at an agreed percentage, if the results justify scaling into more ambitious hardware and applications.
The 12-month goal: turn the CCT framework (including RFH + Prog_T) from a promising theory-plus-simulation stack into a validated methodology plus a reference device. Deliverables:
- A simulation-to-hardware pipeline with at least one pre-registered prediction test completed (taking a pre-registered horizon simulation prediction into real optics).
- Design and partial realization of a photonic reference bench, reproducing predicted band structure and quantized-filter behavior in real optics (within declared tolerances), in collaboration with photonics/quantum optics facilities.
- RF/EM Field-Control Bench (Space Track): A tabletop demonstrator using RF/EM phase-synchronized emitters for active coherent field stabilization. Success criteria: (a) stable field-shaped potential geometry under closed-loop phase control (α in [0.9, 1.1]), (b) measurable Prog_T under feedback. This validates the core "Formation Control" capability before scaling.
- First programmability-per-joule (Prog_T) measurements in hardware, compared against simulation using pre-declared topological/coherence observables (e.g., band counts and coherence metrics) and uncertainty bands.
- VO₂ Coherent Phase Switching (Phase 2): Demonstrate that coherent optical driving triggers the insulator-metal transition with higher Prog_T than thermal equilibrium. Success criteria: Prog_T(coherent) > Prog_T(thermal) by >1.5×. This validates CCT at the material level.
- YBCO Coupling Experiment Design (Phase 3, stretch): If VO₂ succeeds, design the cryogenic experiment for YBCO Tc tuning (months 9-12, contingent on Phase 2 validation).
- A public software toolkit (“Lens Tools”) for RFH/Prog_T analysis and benchmarking on external systems (and datasets)—plus a reference implementation and calibration protocols provided via our lab hardware kits / partner builds (so results stay reproducible, not “works on my machine”).
- Initial analysis on existing bioelectric regeneration datasets in collaboration with biology labs, establishing biology as a third, multi-scale testbed—with no in-house wet-lab build-out.
Collaborations & Enablers¶
- High-intensity theory and simulation work to map promising regimes and prioritize what to validate in hardware next.
- Access to photonics/quantum optics facilities for bench builds and calibration (equipment sharing, not deliverable partnerships).
- Collaborations with developmental biology labs for data access and applying RFH/Prog_T to regeneration datasets—since we provide math + tools only.
Public outputs¶
- A validated methodology and initial reference hardware (TRL ~3–4).
- Open tools and a public evidence base across at least three domains (physics, engineering, biology).
Budget use ($250k)¶
- Photonics bench parts & metrology time — optics, mounts, detectors, measurement access
- Fabrication & fixtures — custom components, air-bearing stage or pendulum setup
- Measurement hardware — phase-locking electronics, feedback controller, DAQ
- Compute — simulation runs, data storage, analysis infrastructure
- Personnel time — researcher salary, experiment execution, analysis
- Pre-registration, publication & open-source — protocol documentation, Lens Tools release
This 12-month program establishes the reference device and evidence base needed to scale into more ambitious hardware and applications.
Risks & Decision Gates¶
These are explicit criteria for when we should iterate, narrow the claim, or reallocate effort—so we don’t over-interpret early results.
- RFH doesn’t replicate (in a declared regime): If regime-local exponents/bands fail to reproduce in pre-registered tests on new datasets/hardware, we treat that as evidence a specific regime claim is wrong. We tighten the estimator/regime definition and rerun; if it still fails, we drop that claim and publish the negative result.
- Predicted configuration doesn't translate to hardware: If the bench can't reproduce the predicted band structure within declared tolerances, we update the model/bench and iterate. If it remains unreproducible, we classify it as a simulation artifact and move the pipeline to the next candidate geometry/substrate.
- No measurable uplift in early targets: If programmability‑per‑joule measurements and task benchmarks don’t beat strong baselines, we don’t scale the application narrative yet. We narrow to the problem classes where a substrate shows a measurable advantage and keep the rest as longer-horizon.
- Space doesn’t scale under feedback: If measured coherence/band structure and phase/ToF observables fail to improve under increased control bandwidth (or degrade under closed-loop operation), we narrow the space narrative to the regimes where infrastructure demonstrably improves energy routing and control, and keep longer-horizon tracks behind hardware gates.
Roadmap (After the 12‑month program)¶
Months 12–24 — Replication and scaling¶
- Harden the photonic reference bench into a repeatable reference device (repeatability, calibration, documented tolerances).
- Expand programmability-per-joule measurements beyond the first device (multiple substrates/architectures; head-to-head comparisons).
- Grow Lens Tools into a reproducible pipeline with benchmark datasets and protocols others can run—and a certified “reference device + measurement stack” path for teams who want apples-to-apples hardware comparisons.
2–5 years — Application prototypes¶
- AI: Targeted analog co-processor prototypes with measurable energy gains on selected tasks.
- Space: Phase 1 (Field Control) → Phase 2 (VO₂ Material Control) → Phase 3 (YBCO Quantum Control), progressing toward infrastructure-first mission prototypes with partners. Each phase builds on validated results from the prior phase.
- Bio: Partner-lab pre-registered studies in morphogenesis/regeneration using the same scoring lens.
Output: A pipeline that moves from theory → simulation → reference devices → application prototypes.
Why CCT Labs (Counterfactual Value)¶
What's missing without CCT?
- No unifying framework: coherence phenomena are studied in silos (photonics, superconductors, biology) without shared metrics or theory
- No cross-domain transfer: insights from LIGO don't inform bioelectric research; analog computing doesn't connect to formation control
- No clear "north star": no way to know if a discovery in one domain applies elsewhere

What's missing without CCT Labs?
- Methodology stays academic: RFH and Prog_T remain journal metrics, not engineering tools
- No hardware validation: simulations don't prove anything until bench-replicated
- No reference devices: each group builds from scratch instead of using validated specifications
- Space application stays speculative: coherent field control for formation/propulsion remains a paper concept

What CCT Labs provides:
- A single measurement stack (RFH, Prog_T) that works across domains
- A validation loop that turns theory into hardware
- Reference devices and methodologies others can license or replicate
- A bridge from "interesting physics" to "space capability"
Why Now, and Why a Lab?¶
Why now
- We already have candidate geometries, metrics, and pre-registered simulation predictions; the bottleneck is bench replication.
- Photonics tooling and measurement infrastructure are now accessible enough to run tight replication loops quickly.
- Energy/control constraints are becoming first-order in AI and space, making bits-per-joule advantages strategically valuable.
- For infrastructure-first space systems, a growing part of the bottleneck is metrology and control: precision photonics, timing, and feedback systems are finally strong enough to prototype these approaches on Earth and scale outward.
- The enabling stack for infrastructure-first space systems is already real (DSOC deep-space optical links, power beaming studies, SBSP assessments), so the remaining question is where coherence + feedback yield outsized leverage per joule.
Why a dedicated lab
- The validation loop breaks if theory, simulation, and bench work are split across organizations: iteration slows and assumptions drift.
- A focused lab can run the loop end-to-end (model → pre-register → build → measure → update) and produce reference devices + tools others can adopt.
- The intent is a Bell‑Labs‑style pipeline: steady principles, reference devices, and spin-outs grounded in one control framework.
Our aspiration is space. If programmable field coherence validates, space is the long-horizon application domain with the highest leverage for this lab. AI and Bio provide calibration, validation, and near-term partnerships; space remains the downstream focus once hardware validates.