You've been there. A government form with a dropdown. A research database with required fields. A system that needs to put you somewhere. The dropdown gives you choices. None of them are right.
You pick the closest one. The system records it. The record moves through pipelines you'll never see. Somewhere downstream, it produces output — a statistic, an allocation, a policy decision — that looks like it's about you. It isn't. It's about what the dropdown could hold.
When a system forces your identity into a category that was never built to hold it, the system produces output. The output looks real. It has a value in a field. It travels. Other systems use it. But it's been severed from the relations that made the data mean something — the political history, the governance obligations, the kinship, the sovereignty. Those didn't fit in the dropdown.
SRA calls this an artifact: output that looks like knowledge but has been cut from the conditions that made it true. Not an error in the sense of a mistake you can correct by entering the right value. An error in the architecture. The system cannot represent what it needs to represent, so it substitutes something it can. The lie is built into its structure.
The system can't tell the difference between an artifact and a representation. That's the problem. It processes both the same way.
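To make that concrete, here is a minimal sketch in TypeScript. Every name in it is hypothetical, invented for illustration; SRA is not a codebase, and this is not its schema. The first shape is what the dropdown produces: a bare value with no record of what it was severed from. The second makes severance part of the type, so downstream code has to decide which kind it is holding before it can use either value.

```typescript
// A bare field: once stored, an artifact and a representation are
// indistinguishable. Everything downstream processes both the same way.
type BareRecord = { affiliation: string };

// A tagged alternative (hypothetical vocabulary, not an SRA schema):
// every output carries whether the relations that grounded it survived.
type Grounded = {
  kind: "representation";
  value: string;
  relations: string[]; // the conditions the value still depends on
};
type Severed = {
  kind: "artifact";
  recordedValue: string; // looks just like a representation's value
  severedFrom: string[]; // what the dropdown could not hold
};
type Output = Grounded | Severed;

// Downstream code can no longer treat the two alike: it must narrow on
// the tag before it can touch either value.
function report(o: Output): string {
  switch (o.kind) {
    case "representation":
      return o.value;
    case "artifact":
      return `artifact (severed from: ${o.severedFrom.join(", ")})`;
  }
}
```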
Most data systems share an assumption so deep it rarely gets named: the thing already exists before you put it into the system. You're just capturing it. The question is who gets access.
SRA asks a prior question: what happens to a thing when you move it into a system?
Some things survive. A photograph of a chair is still about a chair. The chair's persistence conditions — the conditions under which it remains what it is — are compatible with photography. But some things don't survive, because they're constituted by their relations. Break the relation and you don't have a piece of the thing. You have something else entirely. The dropdown doesn't give you part of MHA (Mandan, Hidatsa, and Arikara) nationhood. It gives you an artifact.
SRA names these persistence conditions: the relational configuration under which a pattern can present a stable interface to a given observer or system. Their absence signals not withdrawal but incommensurability — the pattern has not retreated; the system encountering it has lost the capacity to receive it. What conventional data governance reads as absence or refusal is often incommensurability it cannot distinguish from nothing. Before any decision about access or storage, SRA asks: can this survive becoming data? If yes — under what constraints? If no — what is the correct system behavior?
The answer to the last question is not "store it anyway and add a warning label." The answer is refusal. And refusal, in SRA, is not a failure state. It is a designed outcome. The system working correctly.
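Here is that intake decision as a sketch, again with invented names. The detail that matters is that refusal is an ordinary return value, not an exception: the type treats "refuse" as one of three legitimate outcomes, which is what it means for refusal to be designed rather than a failure.

```typescript
// Hypothetical intake gate, asked before anything is stored. The shapes
// are illustrative, not an SRA API.
type IntakeDecision =
  | { outcome: "admit" }
  | { outcome: "admit-with-constraints"; constraints: string[] }
  | { outcome: "refuse"; reason: string }; // designed outcome, not an error

interface Candidate {
  value: unknown;
  // Can the pattern present a stable interface to this system?
  survivesCapture: boolean;
  // If it survives only conditionally, under which constraints?
  requiredConstraints: string[];
}

function intake(c: Candidate): IntakeDecision {
  if (!c.survivesCapture) {
    // The system working correctly: it declines to manufacture an artifact.
    return {
      outcome: "refuse",
      reason: "persistence conditions incompatible with capture",
    };
  }
  if (c.requiredConstraints.length > 0) {
    return {
      outcome: "admit-with-constraints",
      constraints: c.requiredConstraints,
    };
  }
  return { outcome: "admit" };
}
```

Nothing in the refuse branch throws or logs a fault. The caller receives the refusal as data and is expected to act on it.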
Relations with land, water, and more-than-human persons are not an extension of this framework. They are its original ground. The Missouri River was a relative before it was a resource. When the Army Corps closed the Garrison Dam in April 1953 and dedicated it that June, the river did not merely change course; it ceased to be what it had been. 156,000 acres of the best bottomland drowned. Cottonwood groves stopped regenerating. The seasonal flood patterns that Maaxiiriwia knew when teaching which plants grew along those banks — patterns her knowledge was constituted by — were replaced by regulated releases calibrated to downstream hydropower demand. The dam did not ask whether the Missouri River's persistence conditions could survive impoundment. It built. That is the artifact problem at civilizational scale.
SRA at the more-than-human scale is not resource management or ecological data capture. It is relational maintenance infrastructure: the work of sustaining conditions under which both knowledge and the relations that constitute it can remain what they are.
SRA is not a policy. It's an architecture — a set of components that work together to prevent artifact production before it starts.
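One way to picture that, reusing the Candidate and intake pieces from the sketch above (still hypothetical wiring, not an SRA implementation): the only architectural claim here is ordering. The persistence-condition check sits in front of the store, so refusal happens before anything is written, not as a warning label attached afterward.

```typescript
// Hypothetical components wired in front of a store. What matters is
// that intake() runs first: a refusal means nothing reaches storage.
interface PersistenceConditionSource {
  assess(value: unknown): Candidate; // Candidate from the sketch above
}

interface Store {
  put(value: unknown, constraints: string[]): void;
}

function govern(
  source: PersistenceConditionSource,
  store: Store,
  value: unknown
): IntakeDecision {
  const decision = intake(source.assess(value));
  if (decision.outcome === "refuse") return decision; // nothing stored
  const constraints =
    decision.outcome === "admit-with-constraints" ? decision.constraints : [];
  store.put(value, constraints);
  return decision;
}
```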
AI systems now produce artifacts at scale. Every training dataset built from Indigenous-authored content without consent drives the capture assumption deeper into infrastructure. Every model trained on that data learns to produce artifacts and passes them downstream as representations.
SRA isn't only for Indigenous data — the architecture is general, applicable anywhere persistence conditions matter. But it was developed from the specific: from MHA Nation, from the Missouri River, from Maaxiiriwia's plant knowledge, from what happens when a system can't see what it's destroying.
The general theory is earned from particular ground. The particular ground is not an example of the theory. The theory is an account of the ground.
The question is whether systems get built that can ask the prior question before the dam closes.
Systems that cannot read persistence conditions do not only see absence where there is absence. They also see absence where there is incommensurability: where the thing is present but has refused the terms of the encounter. The outputs they generate are not representations of nothing. They are ghost outputs: references to a relationship only one party is pretending to have.
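A last sketch makes the distinction such a system would need, once more in invented vocabulary. A lookup that can only answer with a value or null collapses refusal into absence; a three-valued result keeps them apart, so nothing has to be fabricated when the honest answer is present, but declined.

```typescript
// A nullable result collapses two different situations into one.
type NullableRead = string | null;

// A three-valued result keeps them distinct. "incommensurable" is this
// sketch's word for: the pattern is present, but this system has lost
// the capacity to receive it. Inventing a value here is a ghost output.
type Read =
  | { status: "present"; value: string }
  | { status: "absent" } // nothing is there
  | { status: "incommensurable"; terms: string }; // present, but refused

function describe(r: Read): string {
  switch (r.status) {
    case "present":
      return r.value;
    case "absent":
      return "no record";
    case "incommensurable":
      // Correct behavior: report the refusal; produce no stand-in value.
      return `declined under terms: ${r.terms}`;
  }
}
```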