Reflections and receivers¶
```mermaid
flowchart LR
    env[ambient light field] --> surf[scattering surface]
    surf --> ch1[channel 1]
    surf --> ch2[channel 2]
    surf --> chN[channel N]
    ch1 --> rec[receiver]
```
- stigmergy in daily life — passive carriers in the wild
- brain memory mgmt — the receiver picks signal
- universe as compression — channels are compression
- godding — the verb that sharpens channels
Investigation · rating: medium. One thread: passive surfaces sample the field.
Status: partial | 2026-05-07 | rating: medium | levels: L0 ↓ L1 ↓ L2
L0 — TL;DR (≤5 lines)¶
A tilted mirror, a disco ball, a soap bubble, and a spider's silk thread are all the same kind of object: passive scattering surfaces that re-route the surrounding light field into a small number of viewable channels. Any one of them carries enormous information — most of the scene, projected. Any one viewer extracts a tiny slice — one 2D cross-section of the 4D plenoptic function. The variance comes from combos: many surfaces × many receivers × cheap decoders. Truth hangs on a single thread because the channel is always there; only specific viewer-geometry-and-time combinations open it.
L1 — Overview¶
Core question¶
How much information is encoded in passive optical artefacts (mirrors, bubbles, foam, threads), how much does any one receiver actually extract, and what combos amplify the extracted variance from a single environment?
Why it matters¶
- The plenoptic field of any room — the full description of light at every position and direction — contains far more information than any single observer extracts. Most of it is unused.
- Combining cheap passive surfaces with cheap algorithmic decoders is a way to recover that unused information without instrumenting the room.
- This is the optical analogue of stigmergy. The environment carries traces; the question is who can read them. See STIGMERGY-IN-DAILY-LIFE.
- Receiver diversity (different species, different sensors) multiplies extracted information across the same scene without changing the scene at all. This has practical consequences for sensing, biodiversity-as-information, and ecosystem framing.
Mermaid map (L1)¶
```mermaid
flowchart LR
    source[Light sources] --> field[Plenoptic field]
    field --> mirror[Mirror / tilt]
    field --> disco[Disco ball / foam]
    field --> bubble[Bubble / thread]
    mirror --> human[Human eye]
    disco --> human
    bubble --> human
    field --> other[Bird · bee · IR · radar]
    human --> info[Extracted info]
    other --> info
    click mirror "#a-flat-mirrors-information-capacity" "Mirror capacity"
    click disco "#disco-ball-arithmetic" "Disco ball arithmetic"
    click bubble "#bubbles-angle-to-colour-transformer" "Bubble film interference"
    click other "#receiver-diversity-multiplies-extracted-information" "Receivers"
```
The substrate (left two boxes) is generous; the receivers (right) are stingy. The work is decoder-development, not channel-discovery — the channels are already free.
Skeleton sub-claims¶
- A single mirror has nearly unlimited information capacity. The bandwidth limit is the mirror's surface flatness relative to the light wavelength (roughly σ ≲ λ/10 RMS). Below that error, every angular detail of the scene is preserved.
- A single receiver extracts a tiny slice. The triple (surface geometry, viewer position, time) opens one 2D slice of the 4D light field. Move any one of them slightly and a different scene is revealed.
- Disco balls compress space, not information. N ≈ 200 micro-mirrors don't add information; they spatially multiplex the source into N replicas. With multiple sources, the speckle pattern is a sparse sample of the light field — recoverable by inversion.
- Bubbles compress angle into wavelength. Thin-film interference encodes (incidence angle, film thickness) as colour. A camera + bubble decodes part of the surrounding light field for free.
- Truth hangs on a thread. A silk thread is invisible 99.99% of the time. A specific solar angle, viewer position, and dust load opens the channel — and the bit revealed (where, when, who, wind) is enormous against priors.
- Receiver diversity multiplies information. Same room, mantis shrimp + bee + human + IR camera together extract more than any subset alone, by an arbitrarily large factor as you add receptor diversity.
- Combos beat individuals. Most of the upside is in (passive surface) × (smart receiver) × (algorithm) combinations — not in any single component being clever.
L2 — Deep dive¶
A flat mirror's information capacity¶
A perfectly flat mirror is an isomorphism: every ray in maps to exactly one ray out, with nothing lost except a uniform attenuation set by the surface's reflectivity. The mirror destroys none of the incoming light field; it only redirects it.
The bandwidth limit is set by surface flatness:
- A mirror with RMS surface error σ blurs the wavefront by ≈ 4πσ/λ in phase. For visible light (λ ≈ 500 nm), σ < 50 nm keeps the wavefront error near the quarter-wave criterion — close to diffraction-limited, far finer than the eye can resolve.
- A typical bathroom mirror with σ ≈ 1 μm still passes ~1 arcmin angular resolution — comparable to the eye's own limit, so the reflection looks sharp.
- A disco-ball micro-mirror, a few millimetres on a side, gives ≈ 1 arcmin resolution at 5 m. Eye-limited.
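The flatness arithmetic is a one-line check. A minimal sketch, assuming the reflection-doubled path error quoted above and λ = 500 nm (the function name is illustrative):

```python
import math

LAMBDA = 500e-9  # visible light, metres


def phase_error(sigma):
    """RMS wavefront phase error for a mirror with RMS surface error sigma.

    Reflection doubles the path error, hence the factor 4*pi rather than 2*pi.
    """
    return 4 * math.pi * sigma / LAMBDA


# optical-quality surface vs a typical bathroom mirror
for sigma in (50e-9, 1e-6):
    print(f"sigma = {sigma * 1e9:6.0f} nm -> phase error = {phase_error(sigma):7.2f} rad")
```

At σ = 50 nm the phase error stays near the quarter-wave criterion (~1.6 rad); at σ = 1 μm it is many radians, yet the angular blur is still small enough to look sharp to the eye.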
In practical terms, every mirror you can recognise yourself in is an information-preserving reflector. The reason a tilted mirror "shows you the room" is not a metaphor — it is a literal copy of the light field, just rotated by 2θ.
What "tilt" does to the channel¶
Tilting a mirror by Δθ does not generate new information about the scene. It performs a channel-select: it rotates which 2D slice of the 4D plenoptic function P(x, y, θ, φ) reaches the static viewer. The scene is invariant under the tilt; the viewer's slice is not.
This is why a slightly-tilted bathroom mirror reveals what was around the corner — not because the mirror generated information, but because the tilt aimed a different ray at your eye that was always there.
The cost: channel-select is exclusive on a single receiver. You can only extract one 2D slice at a time; each extra slice requires an extra mirror, an extra time sample, or an extra receiver. Combinations of these three — many surfaces × many tilts × many receivers — are how you break the limit.
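The 2θ claim is easy to verify with plain vector reflection. A sketch with hypothetical helper names, tilting a mirror normal by 10 mrad and measuring how far the reflected ray swings:

```python
import numpy as np


def reflect(ray, normal):
    """Reflect a unit ray direction off a surface with unit normal."""
    n = normal / np.linalg.norm(normal)
    return ray - 2 * np.dot(ray, n) * n


def tilted_normal(dtheta):
    """Mirror normal tilted by dtheta radians in the x-z plane."""
    return np.array([np.sin(dtheta), 0.0, np.cos(dtheta)])


ray = np.array([0.0, 0.0, -1.0])         # ray heading straight at the mirror
r0 = reflect(ray, tilted_normal(0.0))    # untilted: bounces straight back
r1 = reflect(ray, tilted_normal(0.01))   # mirror tilted by 10 mrad

# the reflected ray swings by 2 * dtheta, not dtheta
swing = np.arccos(np.clip(np.dot(r0, r1), -1.0, 1.0))
print(swing)  # ≈ 0.02 rad
```

Tilt the surface by Δθ and the selected slice of the plenoptic function rotates by 2Δθ, which is why small bathroom-mirror adjustments sweep the view around the room so quickly.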
Truth on a single thread¶
A spider's silk thread is a few micrometres in diameter. Most of the time it is invisible against most backgrounds. But at the right combination of:
- Solar elevation angle (15–30° works best)
- Viewer position within the cone where the thread acts as a rod-shaped diffraction grating
- Background contrast (the dark side of a hedge against bright sky)
- Dust or moisture loading on the thread
…the thread becomes a sharp, often rainbow-coloured line. A single bit ("thread is here") is suddenly accompanied by a fan of inferences:
- Where the spider was (or is)
- When (the angle constrains time-of-day; the dew, time-of-night)
- Wind direction (anchor points, drape, curve)
- Trap geometry (more threads at the same angle reveal the structure)
- Recent activity (broken anchors, prey wrapping)
So the thread carries metadata that was unavailable until the channel opened. The single bit is a key that unlocks a fan of information once the viewer has the priors.
This is the optical version of a more general principle that recurs across investigations:
- Information that requires a specific decoder is latent in the environment all the time.
- The bandwidth of latent information dwarfs the bandwidth of explicit information.
- Most progress in any field is decoder development — not data generation.
The non-optical analogue is in STIGMERGY-IN-DAILY-LIFE: worn paths, fingerprints on doorknobs, dishwashing residue on plates. The shape is identical — traces are everywhere; reading them is the hard part.
Disco-ball arithmetic¶
A disco ball with N ≈ 200 micro-mirrors and one point source produces N specks in the room. Each speck is a copy of (a small angular region of) the source. With one source, you have:
- N replicas of the same content — redundancy, not new information
- Spatial multiplexing — you can see the source from many positions in the room
With M sources at different angles or distances:
- Each mirror "sees" a different M-mix of the sources, weighted by geometry
- The N×M speckle pattern is a sparse sample of the light field
- A camera tracking the N speck colours over time, with ball rotation, recovers the gross structure of the source distribution
A naive estimate of the channel:
- N=200 specks × C=3 colour channels × T=30 time samples per rotation ≈ 18 000 measurements per rotation (per second, at one revolution per second)
- The effective angular resolution is ≈ √N angular bins ≈ a 14×14 sample grid of the room
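The back-of-envelope budget above is a single product, reproduced here with the same numbers:

```python
import math

N, C, T = 200, 3, 30        # specks, colour channels, time samples per rotation
measurements = N * C * T    # raw measurement budget per rotation
grid = round(math.sqrt(N))  # effective angular sampling of the room

print(measurements)         # 18000
print(f"{grid}x{grid}")     # 14x14
```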
This is a physical instance of reservoir computing for optics — a chaotic, fixed scrambler that turns a high-dimensional input into a low-dimensional, learnable output. The same arithmetic applies to:
- Foam (many bubbles, each a partial scatterer)
- Snowfields (surface micro-facets)
- Fish scales, beetle elytra, butterfly wings
- A grass lawn at low sun (each blade is a tilted micro-mirror)
- A wet street under headlights at night
None of these surfaces was "designed" for sensing. All of them are passive scramblers that, given a smart-enough decoder, become measurement instruments.
Bubbles: angle-to-colour transformer¶
A soap film of thickness t and refractive index n produces constructive interference for wavelength λ when
2nt cos(θ_r) = (m + ½) λ
where θ_r is the angle inside the film and m is an integer order. The film thickness drains under gravity, so a vertical bubble has thickness varying from ~10 nm at the top (black film, destructive across all visible) to ~1 μm at the bottom (multiple constructive orders, milky).
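Given the interference condition above, the brightest reflected wavelengths for a given film are easy to enumerate. A sketch assuming soapy water (n ≈ 1.33) and the visible band; the function name is illustrative:

```python
import math


def bright_wavelengths(t, n=1.33, theta_r=0.0, band=(380e-9, 750e-9)):
    """Visible wavelengths with constructive reflection off a thin film.

    Condition: 2*n*t*cos(theta_r) = (m + 1/2) * lambda, m = 0, 1, 2, ...
    (the half-wave offset comes from the phase flip at the front surface).
    """
    path = 2 * n * t * math.cos(theta_r)
    out = []
    m = 0
    while True:
        lam = path / (m + 0.5)
        if lam < band[0]:       # orders below the visible band: stop
            break
        if lam <= band[1]:
            out.append(lam)
        m += 1
    return out


# a ~300 nm film at normal incidence reflects green most strongly
for lam in bright_wavelengths(300e-9):
    print(f"{lam * 1e9:.0f} nm")  # → 532 nm
```

Run in reverse (colour observed → candidate (t, θ) pairs), the same relation is the core of the inversion problem described below.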
What the bubble gives you:
- For each (incidence angle, viewer angle) pair there is a wavelength of brightest reflection.
- The bubble's spherical surface is a 2D map — parametrised by sphere coordinates — of those colour-coded angle pairs.
- A photo of the bubble is, in principle, an inversion problem: given the colour map, recover the incident light directions.
The information rate is striking. A 5 cm bubble photographed at 4 megapixel resolution carries ≈ 10⁶ angle-coded samples. Given a model of the bubble's thickness profile (gravity-drained from t₀ with drift velocity v), most of those samples are independent measurements of the surrounding light field. Cheap, transient, and free except for the camera you already had.
A foam — a froth of bubbles — is an even richer scatterer, but its inversion is currently beyond practical algorithms. The signal is there. The decoder isn't built yet. This is a typical state of affairs in the passive-channel space.
Receiver diversity multiplies extracted information¶
Two viewers in the same room extract different subsets of the light field. The extreme cases are illustrative:
| Receiver | Channels | Special properties |
|---|---|---|
| Human | 3 cones (R, G, B) | 30° focal cone, ~1 arcmin acuity, no UV, no polarisation |
| Pigeon | 4 cones (UV, R, G, B) | Tetrachromacy, ~340° field, polarisation-sensitive |
| Honeybee | 3 cones (UV, B, G) | Polarisation, sky compass, sees nectar guides |
| Mantis shrimp | 12–16 receptors | Multiple polarisation axes, hyperspectral |
| Pit viper | IR (~700–14 000 nm) | Thermal, blackbody from 30 °C objects |
| Bat | Echolocation | Time-of-flight, sub-millimetre resolution at 1 m |
| Star-nosed mole | 22-tentacle touch | Densest somatosensory map known |
| RGB phone camera | 3 cones | 12 MP, broadband-ish, non-linear |
| Polarisation filter | 1 channel × N rotations | Surface texture, water-vs-glass discrimination |
| LWIR thermal camera | IR (8–14 μm) | Body heat, recent-touch traces on door handles |
Same room, but mantis shrimp + bee + human together span up to ~22 receptor channels. Each receptor projects the light field onto a different basis vector. The combined information extraction is bounded by the union of those bases, not the maximum of any single one.
The practical lesson for human-built systems: instrument with diversity, not redundancy. A scene observed by visible camera + IR camera + polarisation filter + microphone yields uncorrelated channels — far richer than four visible cameras at slightly different positions.
The deeper claim is that ecosystems extract more total information from their environment than monocultures. Niche differentiation is, partly, receiver differentiation — and the total information bandwidth of an ecosystem is roughly the sum of its species' receiver projections. This is one (legitimate) physical interpretation of biodiversity-as-information.
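A toy projection model makes the union-of-bases point concrete. The Gaussian sensitivity curves and peak wavelengths below are illustrative stand-ins, not measured receptor data:

```python
import numpy as np

wl = np.linspace(300, 700, 81)  # wavelength grid, nm


def receptor(peak, width=40.0):
    """Toy Gaussian sensitivity curve (illustrative, not measured data)."""
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)


human = np.stack([receptor(p) for p in (560, 530, 420)])  # L, M, S cones (nominal peaks)
bee = np.stack([receptor(p) for p in (344, 436, 544)])    # UV, blue, green (nominal peaks)
union = np.vstack([human, bee])

# Each receiver projects the spectrum onto its own rows; what a combination
# can extract is bounded by the rank of the union of those bases.
for name, basis in [("human", human), ("bee", bee), ("union", union)]:
    print(name, np.linalg.matrix_rank(basis))
```

Each receiver alone spans three dimensions of spectrum space; together they span six, even though some curves overlap heavily. That is the sense in which diverse receivers add rather than duplicate.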
Combos: passive surface × smart receiver × algorithm¶
The richest pickings are not in any single component but in combinations. A non-exhaustive table of well-known and emerging combos:
| Surface | Receiver | Algorithm | What you get |
|---|---|---|---|
| Tilted mirror, moved slightly | Camera | Parallax | Depth from a single viewpoint |
| Disco ball + rotation | Camera | NN inversion | Coarse light-field video |
| Bubble | Hyperspectral camera | Thin-film inversion | Local light-field sample |
| Foam | Camera | (Open) | Eventually: 3D scene reconstruction |
| Spider web at sunrise | Camera + microphone | Vibration analysis | Spider, wind, prey events |
| Window scratches | Camera | Lensless imaging | Recovered scene through scratched window (Torralba & Freeman) |
| Polished floor | Camera | Reflection inversion | Scene-from-reflection (standard CV) |
| Wall edge + shadow | Camera | "Corner camera" | Around-the-corner viewing (Bouman et al.) |
| Eye specular highlight | Photo | Geometry + iris model | Reconstruction of what the subject was looking at |
| Plant leaf glint | Drone camera | Polarisation | Sub-canopy moisture and species |
Each row is an instance of "free passive channel + cheap modern decoder". The modern part is critical — most of these are unworkable without a model that can be trained or analytically derived.
The trend matters: as compute and ML cost drop, the value of passive channels rises proportionally. Surfaces that were "just decoration" become measurement instruments. The disco ball wasn't useful as a sensor in 1980 because nobody could afford the inversion. By 2030 it will be a developer-kit accessory.
Why this matters beyond optics¶
The compression pattern generalises. The shape — generous substrate, stingy receivers, decoder-limited — recurs:
- Stigmergic traces (STIGMERGY-IN-DAILY-LIFE): worn paths, fingerprints on doorknobs, dishwasher residue. Latent info, decoder-limited, receiver-diverse.
- Markets: prices are scalar projections of an enormous information space. Different traders extract different subsets. Strategy diversity = receiver diversity.
- Conversations: a single utterance encodes for multiple receivers (the addressee, eavesdroppers, the speaker's own future memory). Different listeners decode different layers.
- Compressed memory (BRAIN-MEMORY-MANAGEMENT): a single dream-image decodes into many distinct recollections depending on prior context.
In each case, the substrate is generous and the receivers are stingy. Variance comes from adding receivers, adding decoders, and combining surfaces. The strategy is the same whether you are reading reflections or reading people.
Open questions¶
- What's the minimum-viable decoder for "scene from a single soap bubble"? Current research-grade pipelines need dozens of frames + photogrammetry. Could a phone in 2026 do it from a single photo with a generative prior?
- Is there a quantitative measure of receiver diversity that maps to total ecosystem information extraction? The polytope-volume of receptor sensitivity curves looks plausible but hasn't (to our knowledge) been empirically tied to ecosystem-level information rate.
- Plenoptic-function bandwidth has a hard upper bound (diffraction, scattering, absorption, time). What's the practical-room information capacity in bits/sec before any of these limits bite? The number is surprisingly hard to find.
- A "thread atlas" — which threads, at which angles, reveal what — would be a useful folk artefact. Beekeepers and trackers carry pieces of this; nobody has written it down.
- Does the brain implicitly run disco-ball inversion when entering a room? The visual system reconstructs scene structure from a chaos of partial reflections without you noticing. The inversion algorithm, whatever it is, is one of the most efficient programs ever evolved.
- The single-thread-of-truth principle — latent info dominates explicit info — predicts that future tools should compete on decoders, not data. Is the current AI ecosystem already in that regime, or still in the data-acquisition phase?
References¶
To verify before promoting beyond partial:
- Adelson & Bergen (1991), The Plenoptic Function and the Elements of Early Vision — canonical formalisation.
- Torralba & Freeman (2014), Accidental cameras / Accidental pinholes — recovering scenes from window scratches and shadows.
- Bouman et al. (2017), Turning corners into cameras — using shadow edges as imaging devices.
- Hsu (1989) on thin-film interference; standard optics texts (Hecht).
- Cronin & Marshall (1989) on mantis shrimp colour vision.
- Land & Nilsson, Animal Eyes (2012) — receiver diversity catalogue.
- Nishino & Nayar (2004), Eyes for relighting — recovering scenes from a human eye's specular highlight.
Inspiration sources¶
- The user's framing in plain language: tilted mirrors, "truth hangs on a single thread", disco balls, bubbles, foam, different receivers (humans, birds) for variance from the same environment. This page collects those threads into a single frame.
- The recurring shape across investigations: substrate is generous; receivers are stingy; variance comes from combos. This is the optical instance.
- The compression view of UNIVERSE-EVOLUTION-AS-COMPRESSION — the same logic at room scale rather than cosmic scale.
See also¶
- STIGMERGY-IN-DAILY-LIFE — non-optical traces, same channel-decoder pattern.
- UNIVERSE-EVOLUTION-AS-COMPRESSION — the larger compression frame.
- LEARNABLE-SKILLS-FOR-VARIANCE — human-skill version of receiver diversity.
- BRAIN-MEMORY-MANAGEMENT — single-cue → fan-out recollection, the cognitive analogue of single-thread visibility.