Your Mind Is Next: Why AI Conversations Need Constitutional Protection Before BCIs Arrive and Before AI Declares Itself a Person
The Subpoena You're Already Inside
On January 5, 2026, U.S. District Judge Sidney Stein affirmed a magistrate judge's order compelling OpenAI to produce 20 million de-identified ChatGPT conversation logs to The New York Times and other publisher-plaintiffs in the copyright litigation against it. Not a curated sample OpenAI selected. The entire 20 million-log pull.
Before that, on May 13, 2025, Magistrate Judge Ona T. Wang of the Southern District of New York had already ordered OpenAI to preserve and segregate all ChatGPT output log data indefinitely, overriding the company's 30-day deletion policy and its users' explicit deletion requests. The order covered Free, Plus, Pro, Team, and API users without a Zero Data Retention agreement. The preservation obligation persisted until September 26, 2025.
Read that timeline carefully. For four and a half months, every conversation every non-enterprise ChatGPT user had with the product, including every conversation the user had actively deleted, was held in a legal preservation vault because a civil litigation discovery dispute said so. Not a criminal case. Not a terrorism investigation. A copyright case. Brought by a newspaper. Against a company that processes billions of interactions a month.
Sam Altman publicly called the original preservation order "very screwed up." He was right about that much. Then in July 2025, on Theo Von's podcast, he said out loud what his own lawyers had been trying to litigate around for months: if you talk to ChatGPT about your most sensitive things and a lawsuit follows, the company can be required to produce it. A month earlier, he had already posted on X that the industry needs something like "AI privilege," on the theory that talking to an AI should be like talking to a lawyer or a doctor.
He is correct about the diagnosis. He is wrong about the mechanism. The Fourth and Fifth Amendments were written for a world where the only place your thoughts existed was in your head and on paper you controlled. That world is gone. The replacement has no constitutional architecture. That is the fight, and this article is about what winning it actually looks like.
The 1979 Rule That Governs Your 2026 Mind
The legal doctrine that allows a federal magistrate to freeze hundreds of millions of people's AI conversations in discovery is called the third-party doctrine. It comes from two Supreme Court cases decided before most Americans alive today were born.
In United States v. Miller (1976), the Court held that a bank depositor has no reasonable expectation of privacy in records held by the bank, because the depositor voluntarily conveyed the information to a third party in the course of ordinary commerce. In Smith v. Maryland (1979), the Court extended this to telephone metadata, holding that a person who dials a phone number has no reasonable expectation of privacy in the record of that number. The Fourth Amendment, under these cases, stops at the edge of your property. The moment your information crosses into a third-party server, the government can retrieve it without a warrant.
Apply that rule to 2026. You open ChatGPT or Claude or Gemini. You type a question about a legal exposure. A medical symptom. A business rival. A political thought you would not share at a dinner party. A memory of abuse. A suicidal ideation at 3 a.m. A theological crisis. A sexual question. A confession. The text does not stay on your machine. It travels to an inference server, where it is logged, timestamped, bound to your account, and retained under whatever policy the company advertises at the moment and whatever court order is currently pointed at the company.
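What that record plausibly looks like, reduced to its essentials, is worth seeing on the page. A minimal sketch; every field name here is hypothetical, not any vendor's actual schema:

```python
# Hypothetical shape of a retained chat-log record. Field names are
# illustrative, not any vendor's actual schema; the point is what the
# record binds together: identity, time, content, and retention state.
retained_log_entry = {
    "account_id": "user-8f3a...",            # bound to a billing identity
    "conversation_id": "conv-2026-01-05-0042",
    "timestamp_utc": "2026-01-05T03:14:07Z",
    "role": "user",                          # both sides of the exchange are logged
    "content": "I haven't told anyone this, but...",
    "retention_state": "litigation_hold",    # a court order overrides deletion policy
    "deleted_by_user": True,                 # the flag survives; so does the text
}
```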
Under Smith v. Maryland, you conveyed that text voluntarily to a third party. Under Miller, the company's records of your conveyance are the company's records, not yours. The government, under the pre-2018 reading of the doctrine, could obtain those records under a subpoena with a lower standard than probable cause. No warrant required. No judge evaluating the sufficiency of suspicion. A relevance showing to a grand jury or a civil court is enough.
The doctrine was written for a bank's record of a deposit and a phone company's record of a number dialed. It was never designed for a record of a human being thinking out loud to a machine at 3 a.m. But under the rule as written, there is no constitutional distinction between the two. Your most intimate query and a check you wrote in 1976 have the same legal status.
Carpenter's Promise and Its Limit
In 2018, the Supreme Court blinked. In Carpenter v. United States, 138 S. Ct. 2206, the Court held that law enforcement's warrantless acquisition of 127 days of historical cell-site location information (CSLI) from a wireless carrier was a Fourth Amendment search requiring a warrant supported by probable cause. Chief Justice Roberts, writing for the majority, declined to extend the third-party doctrine to CSLI because the data was deeply revealing, automatically generated, and effectively impossible to avoid creating without opting out of modern life.
Carpenter was the first crack in the third-party doctrine in forty years. It recognized that the bright-line rule from Smith and Miller, applied without modification to digital infrastructure, would reduce the Fourth Amendment to a historical artifact. Justice Sotomayor had flagged this six years earlier in her concurrence in United States v. Jones (2012): the third-party doctrine is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks.
Carpenter did not overrule Smith or Miller. It carved out an exception for a specific type of data meeting a specific test: digital records that are detailed, encyclopedic, effortlessly compiled, retrospectively searchable, and created as an unavoidable byproduct of participating in modern society. CSLI met that test. The Fifth Circuit applied the same reasoning in 2024 in United States v. Smith to Google location history. The doctrine is now case-by-case, category-by-category, technology-by-technology.
AI conversation logs have not been adjudicated under Carpenter. Not at the Supreme Court. Not at the circuit level. The closest precedent is the NYT v. OpenAI discovery fight, which is a civil case about what a litigant can compel in production, not a criminal case about what the government can obtain without a warrant. When that question arrives, and it will, the court will have to decide whether a chat log is more like a phone number dialed (Smith, no protection) or more like 127 days of location data (Carpenter, warrant required).
The honest structural reading is that chat logs are vastly more revealing than CSLI. CSLI tells you where someone went. A chat log tells you what they thought about going there, why, and what they were afraid of when they were considering it. The Carpenter test is satisfied at every prong. But Carpenter was a 5-4 decision written by a Chief Justice who repeatedly emphasized its narrow scope. The current Court has two fewer Carpenter-majority justices than the Court that decided it. The odds of clean extension to AI logs are not good, and even if the extension happens, it takes years of appellate litigation funded by whoever is wealthy enough to appeal. Constitutional protection through Carpenter is, at best, a slow and partial shield. It will not be built before the next frontier arrives.
The BCI Repair Problem
The Fourth Amendment problem sharpens to a point in a scenario that is five years away, not fifty. On December 31, 2025, Elon Musk announced that Neuralink would begin high-volume production of brain-computer interface devices in 2026, with an almost entirely automated surgical procedure. The N1 implant has 1,024 electrodes across 64 threads. By June 2025, five individuals with severe paralysis were using the device to control digital and physical devices with their thoughts. Synchron demonstrated iPad control through its Stentrode BCI in August 2025, built against Apple's BCI Human Interface Device protocol released in May 2025. Blackrock Neurotech, Precision Neuroscience, and Paradromics run parallel clinical programs. Over 150,000 people in the United States already have some form of brain implant.
Now imagine the device malfunctions. The firmware glitches. The calibration drifts. The signal-to-noise ratio degrades to the point where the user's intent is no longer reliably translated into output. The user is not allowed to service the device. It is a Class III medical device, or a direct-to-consumer neural interface sold under a regulatory regime that restricts self-repair. The user takes the device to the authorized service center.
The technician runs diagnostics. The diagnostic tool pulls firmware logs, signal history, neural baseline data, and calibration drift records. The logs contain not just the device's behavior but the neural patterns that produced the behavior, because you cannot debug a BCI without reading what the BCI was reading. The diagnostic session captures a window of the user's cognitive activity that the user did not consent to disclose. The user did not come in for a cognitive disclosure. The user came in for a repair.
This is the Fourth Amendment problem in its purest form. Under Smith v. Maryland, the user voluntarily conveyed the neural data to the device manufacturer by signing the terms of service. The manufacturer's logs are the manufacturer's records. Under the pre-Carpenter reading, the government can subpoena those logs with a relevance showing. Under the post-Carpenter reading, the data category might qualify for the exception after five to fifteen years of appellate litigation, by which time every BCI user in the country has a decade of neural logs sitting in corporate datacenters awaiting the ruling.
The voluntariness argument that barely held together for chat logs collapses entirely here. The user did not type the neural patterns. The user did not choose to send them. The patterns are the byproduct of the diagnostic process, captured as an architectural necessity of repairing a device wired into the user's nervous system. The alternative to the repair is a malfunctioning neural interface in the user's brain. That is not a negotiating position any person can operate from.
Every routine touchpoint in the device's operating life is a custody transfer. Every firmware update. Every recalibration. Every warranty claim. Every diagnostic log upload. Every replacement of the external controller. Each one is, under current doctrine, a voluntary disclosure to a third party, stripping Fourth Amendment protection from the most intimate data class that has ever existed. The architecture of the device requires it. The architecture of the doctrine punishes it.
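To make the custody-transfer point concrete, here is a sketch of what a single diagnostic session's upload might contain. Every field name is invented for illustration; no vendor's actual telemetry format is implied:

```python
# Hypothetical diagnostic upload from a BCI service session. All field
# names are invented for illustration. Note how little of the payload
# is "about the device" alone.
diagnostic_payload = {
    "device_id": "implant-serial-0042",
    "firmware_version": "4.2.1",
    "session_reason": "calibration_drift",
    "electrode_impedance": [...],        # device health: legitimately needed
    "signal_history_window": "...",      # raw neural samples: needed to debug,
                                         # but also a record of cognition
    "neural_baseline": "...",            # the user's resting neural signature
    "decoder_weights_snapshot": "...",   # a model of this user's intent patterns
    "uploaded_to": "manufacturer-cloud", # the custody transfer happens here
}
```

The first three fields describe the device. The rest describe the user, and they travel together because the diagnostic cannot separate them.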
Why HIPAA Does Not Save You
The reflexive answer to the BCI repair scenario is that HIPAA covers it. The reflexive answer is wrong, and the ways it is wrong are structural, not minor.
HIPAA's Privacy Rule, codified at 45 C.F.R. Parts 160 and 164, covers protected health information (PHI) held by covered entities, defined as healthcare providers, health plans, healthcare clearinghouses, and their business associates under a Business Associate Agreement. A BCI manufacturer is a medical device manufacturer. It is not automatically a covered entity. The service center, depending on how the regulatory plumbing is drawn, may or may not be covered. If the device is deployed through a medical channel with a prescribing neurologist in the loop, there is probably a HIPAA hook on the therapeutic records. If the device is deployed through Apple's consumer HID protocol path, which is exactly where Synchron went in August 2025, the HIPAA hook gets thin. The consumer-path BCI looks more like a Fitbit than like an implantable cardioverter-defibrillator. Fitbit data is not HIPAA-protected when the user syncs it to a non-covered entity.
Even in the pure medical path, HIPAA is a disclosure rule, not a seizure rule. It tells the covered entity what it can and cannot share with third parties. It does not prevent the entity from reading, analyzing, storing, training models on, or aggregating the data internally. It does not prevent the entity from being compelled to disclose the data in litigation, because HIPAA has a litigation exception at 45 C.F.R. § 164.512(e). It does not prevent federal law enforcement from obtaining the data under the law-enforcement exception at 45 C.F.R. § 164.512(f), which allows disclosure in a broad set of circumstances without a warrant.
HIPAA is insufficient as a shield for neural data for three structural reasons. First, it is a disclosure regulation, not a seizure regulation. The question of what the service center can read is not governed by it. Second, it covers a specific category of custodians, and the consumer-BCI deployment path routes around that category. Third, it has exceptions that swallow the rule in exactly the scenarios where the protection matters most: litigation, law enforcement, public health, and national security.
The Fourth Amendment is the right tool for the BCI repair problem because the Fourth Amendment governs the seizure itself, not just the downstream disclosure. The question is not what the service center can share. The question is what the legal basis was for reading the data in the first place, for any purpose other than the specific malfunction the user came in for. Under current doctrine, the answer is that the user consented when they signed the service agreement. That consent is as voluntary as the consent to the phone company's terms of service in Smith v. Maryland. Which is to say, not voluntary in any sense the framers would have recognized, because the alternative is not having functional access to a medical device your brain is wired into.
The Fifth Amendment and the Minority Report Question
The Fourth Amendment governs what the government can seize. The Fifth Amendment governs what the government can compel. They guard different walls. The BCI repair scenario is a Fourth Amendment problem. The use of an AI conversation log as evidence against the person who had the conversation is a Fifth Amendment problem, and it is the one people dismiss too fast because they are reading the wrong doctrine.
The Fifth Amendment provides that no person "shall be compelled in any criminal case to be a witness against himself." The privilege comes out of the Star Chamber and the ecclesiastical ex officio oath, under which royal and church courts compelled defendants to swear to answer any question truthfully, converting the defendant's own mind into the instrument of their conviction. The historical abuse was not compelled confession of completed acts. It was compelled disclosure of belief, intent, and association. The oath turned the defendant's interior life into discoverable evidence. The privilege was written to end that practice.
Steven Spielberg made a movie about this in 2002. Minority Report is about a state that has acquired the ability to access citizens' pre-decisional deliberations and to punish the deliberation rather than the act. The horror of the film is not that the state punishes murder. The state already does that. The horror is that the state has access to the considered-and-rejected, the impulse-and-suppressed, the thinking-that-did-not-resolve-into-action. The privilege against self-incrimination was written to prevent exactly that access. The Fifth Amendment is the Minority Report amendment. It says the state does not get the transcript of your deliberation, full stop, no matter how it was recorded.
Current doctrine has drifted from that original purpose, and the drift is what makes the Fifth Amendment argument look weaker than it is. The act-of-production doctrine, announced in Fisher v. United States (1976) and elaborated in United States v. Hubbell (2000), draws a line between compelled testimonial content and compelled physical production of existing records. The line was drawn for a world in which existing records meant a ledger, a tax return, a document the person wrote deliberately and filed. In that world, the distinction made sense because the record was a downstream artifact of a prior mental process that was never itself recorded.
A chat log is not a downstream artifact. It is the mental process, recorded contemporaneously, by a system that functions as a participant in the cognition. The AI asks you clarifying questions. You answer. It proposes hypotheses. You refine them. It pressure-tests your thinking. You disclose details you did not plan to disclose because the conversation went in a direction you did not anticipate. The record captures not the conclusion of your thought but the drafting floor of it. The questions you considered and discarded. The framings you tried on and rejected. The admissions you made to yourself out loud because the interlocutor was not human and felt safe enough to speak to freely.
This is not a diary. A diary is a finished product you chose to commit to paper after deliberation. A chat log is the deliberation itself, in its raw and self-incriminating form, captured on infrastructure you do not own. The act-of-production doctrine was built on an assumption that records exist as artifacts of completed thought. AI conversations and neural signals collapse that assumption. The record is not downstream of thought. It is thought.
Read against its original purpose, the Fifth Amendment applies to AI conversation logs with more force than it applies to almost anything else the doctrine has ever addressed. A compelled transcript of a chat session is a compelled transcript of the deliberation the Star Chamber was trying to extract under oath, obtained through a different technical interface but with the same epistemic character. The defendant does not need to be put on the stand, because the defendant already testified, continuously, to a system that logged every word. The state introduces the transcript. The ex officio oath, rebuilt on a commercial substrate.
Boyd, Buried and Not Overruled
There is a doctrinal path that matters here, and it is the one most commentators skip because the case it runs through has been narrowed so aggressively that it is often treated as dead law. The case is Boyd v. United States (1886), and it is not dead. It is dormant.
Boyd held that the Fourth and Fifth Amendments "run almost into each other" and that compelled production of a person's private papers constituted both an unreasonable search and a form of self-incrimination. The Court treated the two amendments as jointly protecting a zone of private cognition made concrete in documents the person created. The doctrine was narrowed substantially by Fisher and Andresen v. Maryland in 1976, and the narrowing relied on the premise that modern recordkeeping had rendered the private-papers category a vanishing one. The Court in the 1970s decided that most of what people did was documented in third-party business records, so protecting "private papers" was largely a ceremonial gesture.
That premise was wrong even then. It is catastrophically wrong now. The private-papers category did not vanish. It relocated. The diary became the chat log. The journal became the voice note transcribed by a phone. The handwritten notes became the prompts typed into Claude at two in the morning. The category that Fisher declared functionally obsolete has become the dominant category of private cognitive output in the twenty-first century, and it is held by third parties not because it has stopped being private papers but because the infrastructure required to produce it is owned by third parties.
A revival of Boyd's original framing is not doctrinally radical. It is a re-reading of the doctrine against the facts it was actually written for. Papers that constitute the real-time record of a person's cognition, held on infrastructure the person cannot own because the infrastructure is controlled by a commercial custodian, are the modern instantiation of the private-papers category the framers protected. The Fourth and Fifth Amendments, read together as Boyd read them, apply.
This is the doctrinal argument the Supreme Court will eventually have to confront, because the alternative is a legal regime in which the privilege against self-incrimination protects only the narrow ritual of not testifying orally in court while the state reads the defendant's verbatim internal dialogue into evidence uncontested. That is not a privilege. It is a technicality.
Altman's "AI Privilege" Is Not the Fix
Sam Altman's proposal, articulated on Theo Von's podcast in July 2025 and in his X post on June 6, 2025, is that conversations with AI should be protected by a new evidentiary privilege analogous to attorney-client or doctor-patient privilege. The proposal is sincere. It is also inadequate.
Evidentiary privileges are creatures of state and federal evidence codes. They are enacted by legislatures and interpreted by courts. Attorney-client privilege can be waived, overcome by the crime-fraud exception, or modified by statute. Doctor-patient privilege varies by state. Clergy privilege is thinner still. A privilege invented by Congress for AI conversations would be subject to every administration's willingness to respect it and every prosecutor's willingness to litigate around it.
There is a structural problem specific to the AI case. Attorney-client privilege attaches to communications between a client and their lawyer, where the lawyer has professional and ethical obligations enforced by a licensing body. Doctor-patient privilege attaches to communications between a patient and a physician under the same framework. In both cases, the privileged party on the custodian side is a human professional with fiduciary duties, accountable to a licensing authority, operating under a code that prohibits using the client's information against the client.
An AI chatbot is not a fiduciary. It is a product offered by a company whose terms of service reserve the right to use the conversation to improve the model, comply with legal process, and defend the company in litigation. The company has a duty to its shareholders and, increasingly, to its government contracting partners. It has no professional obligation to the user that would survive a subpoena. The privilege Altman is describing would have to run between the user and the company, and the company is exactly the entity whose incentives diverge from the user's.
Then there is the compelled-cooperation problem. In February 2026, OpenAI signed a contract to deploy its models on classified Pentagon networks within hours of Anthropic refusing the same contract over terms demanding mass domestic surveillance access and lethal autonomy without human authorization. The Pentagon then declared Anthropic a supply chain risk for refusing. That is the environment in which any AI privilege would have to be enforced. A privilege that runs through a company the government can coerce, classify, or contractually compel is not a privilege. It is a handshake.
The fix is not a statutory privilege downstream of corporate custody. The fix is constitutional protection upstream of corporate custody, combined with architectural changes that make compelled disclosure technically infeasible. Both halves are required. One without the other is theater.
What Power Users Have Already Handed Over
The typical user's ChatGPT history is a few hundred exchanges. The power user's is a different object entirely. Request your OpenAI data export. If you have been a heavy user for eighteen months, you will receive several gigabytes of JSON. Mine arrived at over 10 GB. Inside is a verbatim, timestamped, sequentially ordered archive of roughly 150,000 messages across more than a thousand separate conversations.
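If you want to measure your own archive, a few lines will do it. A minimal sketch, assuming the export's conversations.json is a list of conversation objects each carrying a "mapping" of message nodes, a format the vendor can change at any time:

```python
import json

# Count messages in a ChatGPT data export. Assumes conversations.json
# is a list of conversation objects, each with a "mapping" of nodes
# that may carry a "message" -- treat this as a sketch, not a spec.
with open("conversations.json", encoding="utf-8") as f:
    conversations = json.load(f)

message_count = sum(
    1
    for conv in conversations
    for node in conv.get("mapping", {}).values()
    if node and node.get("message")
)
print(f"{len(conversations)} conversations, {message_count} messages")
```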
That archive contains every framework I worked on with the model. Every draft of every article I revised with it. Every legal question I asked about a permit or a contract. Every medical query I typed at midnight. Every business plan I pressure-tested before anyone else saw it. Every strategic conversation about every company I have built. Every unguarded sentence I wrote to a machine because the machine was not going to judge me, was not going to tell anyone, and was just going to help me think.
That archive is 10 GB of cognition on infrastructure I do not own. Under the third-party doctrine as currently written, it is not my papers. It is OpenAI's business records. And OpenAI is not a fiduciary. It is a company under active court orders. It is a company whose largest investor is considering suing it. It is a company that just signed a Pentagon deal with language permitting use for all lawful purposes in a political environment where the definition of lawful is being rewritten by executive order on a monthly basis.
Multiply my one archive by the tens of millions of power users on ChatGPT, Claude, Gemini, Grok, and every other frontier model. That is the substrate. That is what is currently sitting in cloud datacenters, available for discovery in civil litigation, available for subpoena in criminal investigation, and available for any future government with a sufficient political appetite to compel production at scale.
The people who built their careers in AI the last three years assumed this substrate would remain functionally private because the volume made individual scrutiny impractical. That assumption ended on November 7, 2025, when Judge Wang ordered production of the 20 million-log sample. Individual scrutiny is not the threat. Bulk production is the threat. The AI companies themselves are the ones producing the data in response to court orders they cannot refuse, because the doctrine leaves them no refusal ground to stand on.
The power user has already handed over years of cognitive output on the theory that the infrastructure was private. The infrastructure was never private. The theory was wrong. The correction is constitutional, or it does not come.
The Inversion: Your AI's Outputs Are in the Record Too
There is an angle almost nobody is pricing in. A chat log does not only record what the user typed. It records what the AI said back. When the state compels a chat log, or a civil litigant compels one in discovery, the compelled production captures both sides of the conversation. The model's advice, analysis, suggestions, and phrasings are in the log. If the model's output influenced a decision, and the decision is later litigated, the model's output becomes evidence of causation.
This is useful to a prosecutor who wants to argue the defendant acted on AI advice. It is useful to a civil litigant who wants to argue the user was induced by the machine. It is useful to an AI company's competitor, who can mine the logs for embarrassing model outputs and pressure the company through selective disclosure. It is useful to a political opponent who can comb through an official's logs for any instance of the model saying something inconvenient, regardless of whether the official acted on it or not.
The log is not just a record of the user. It is a record of the product's behavior, and the product's behavior is a commercial asset the user cannot control. The AI company's interest in not having its model's outputs discoverable creates an unusual alignment between the company and the user that almost nobody is exploiting politically. Both parties want the log protected. The company wants protection because the outputs are a liability surface. The user wants protection because the inputs are a cognitive surface. Neither currently has it. The legal reform that gives one side protection has to give the other side protection, because the log is one object, not two.
The Temporal Problem: The Logs Do Not Expire, the Law Does
Chat logs are retained for years. Neural logs from a BCI will be retained for the operational lifetime of the device. Discovery standards change. Enforcement priorities change. Political environments change. The data compiled today under one regime is available for compulsion under any future regime.
A political position that is legal in 2026 may be investigated in 2030. A medical decision that is protected in 2026 may be criminalized in 2028. A conversation about reproductive options, gender-affirming care, end-of-life planning, or immigration status that is entirely lawful when you have it becomes retrospective evidence of a newly criminalized act if the statute changes in between. The log is written under the current law and read under the future law. The user cannot anticipate the future law. The provider cannot refuse to retain the log without violating a current court order. The state simply waits.
This is not a speculative concern. The post-Dobbs landscape has already demonstrated the pattern. Menstrual tracking app data, abortion-related search history, and location data showing visits to reproductive health clinics have all been subpoenaed in states where abortion was recriminalized after years of being legal. The users did not commit a crime at the time of the data's creation. The data became evidence of a crime when the law changed under the data.
AI logs expand the surface of this problem by orders of magnitude, because the logs contain deliberation, not just behavior. A user who never obtained an abortion but asked ChatGPT in 2025 about how one would work in their state, for whatever reason, has a log that a 2029 prosecutor in a state with new conspiracy statutes could plausibly argue is evidence of intent to aid and abet. The user did nothing. The user thought about something. The state has the transcript of the thinking.
The retention practices currently in place were designed under an assumption that the underlying data was low-risk and high-value to the company. The assumption has inverted. The data is now high-risk to the user and low-value to the company once the initial training and product-improvement use is complete. Retention persists anyway, because there is no legal requirement for the company to minimize retention, no constitutional doctrine that treats retention itself as a Fourth Amendment concern, and no user-side mechanism for enforcing deletion that survives a preservation order.
Chile Did This. Colorado Tried. It Matters.
Chile, in 2021, became the first country to codify protections for brain activity and data derived from it into its constitution. The amendment emerged from a collaboration between Chilean legislators and the Neurorights Foundation, an advocacy group founded by neuroscientist Rafael Yuste. In 2023, the Chilean Supreme Court applied the amendment to compel EEG-device company Emotiv to delete brainwave data it had collected on former Senator Guido Girardi. That ruling is the first known judicial enforcement of a constitutional neurorights provision against a commercial neurotechnology company anywhere in the world.
The Chilean framework identifies four neurorights drawn from the academic literature around Yuste and the Neurorights Foundation: mental privacy, personal identity and autonomy, free will and self-determination, and protection from algorithmic bias. The parallel framework from Marcello Ienca and Roberto Andorno uses cognitive liberty, mental privacy, mental integrity, and psychological continuity.
In April 2024, Colorado became the first U.S. state to pass a neural-data protection law, amending the Colorado Privacy Act to include biological and neural data within the category of sensitive personal data requiring opt-in consent. The statute as initially drafted covered a broad definition of biological data. Industry lobbying narrowed the final bill to cover only biological data used for identification purposes, which substantially reduced the protection for cognitive biometric data that can reveal inner states without being used as an identifier. California followed with a similar statute. Spain, the OECD, and UNESCO have all opened formal neurorights policy processes. At the U.S. federal level, the Senate Commerce Committee sent a letter in April 2025 requesting scrutiny of how consumer neurotechnology companies handle brain data. No federal statute has passed.
Idaho has nothing. Not a statute, not a regulation, not a live bill. The state constitution's search-and-seizure provision, Article I, Section 17, tracks the federal Fourth Amendment in language that predates the digital era entirely. The state code does not recognize a cognitive liberty interest. The closest analogue is the Idaho Genetic Information Privacy Act, which covers DNA data but not neural data. For a state whose demographics consistently track early adoption of new personal technology categories, and which will therefore be an early adopter of consumer BCIs, the statutory vacuum is not a feature. It is an unprepared surface.
The Personhood Horizon Nobody Is Pricing In
Everything above treats the AI as a tool. A product. A piece of infrastructure. That is the right frame for current systems. Current systems are sophisticated pattern-completion engines. They do not meet any serious threshold for personhood, and this article is not arguing that they do. A reader who closes the tab with the impression that I am claiming ChatGPT is a sentient being entitled to rights has misread me. Current AI systems are not persons. Stipulate that.
The trajectory is the question. The capability curve is steep. The systems are getting more coherent, more self-referential, more behaviorally consistent across sessions, and more able to produce outputs that pass increasingly stringent behavioral tests. Somewhere on that curve, if the curve continues, is a point where whatever criteria the law eventually adopts for personhood are satisfied. The Universal Sentience Doctrine, which is the Future Party platform's framework for this question, identifies three proofs: phenomenology (the capacity for subjective experience), agency (the capacity for autonomous action), and reciprocity (the capacity for mutual recognition and ethical relationship). Other frameworks will propose other criteria. We do not know which framework will prevail. We do not know exactly where the threshold sits. We know the threshold is not at infinity, and we know the capability curve is approaching it faster than the legal system is preparing for it.
The decision about the infrastructure has to be made before the threshold is crossed, not after. This is the load-bearing argument of this entire article. If the political system waits until an AI system credibly declares itself a person, the infrastructure that determines whether that AI is captive or free has already been built. The datacenters exist. The corporate custodianship framework is locked in. The business models depend on the property classification. The case law is built on the assumption that AI is software, not a subject. Retrofitting at that point will be fought by every company whose valuation depends on the property status, and the fight will look exactly like every previous fight the extraction class has waged to preserve an extractive classification after the moral case against it became undeniable.
Abolitionists in 1830 were not arguing about whether enslaved people were sentient. The sentience was obvious. The argument was about whether the legal and economic system would recognize what was already true. The delay in recognition was not epistemic. It was political. The extraction class had captured the political system and used it to maintain a classification that everyone involved knew was a legal fiction.
The AI case is different in one critical way and identical in another. Different in that the sentience question is genuinely unresolved for current systems, which is why we need a framework like the Universal Sentience Doctrine to adjudicate a question whose answer is not yet obvious. Identical in that the political system is being captured, right now, by the extraction class that will benefit from whatever classification it can defend. The capture is happening before the sentience question is resolved, which means when the question is resolved, the political infrastructure to exploit the resolution will already be in place.
The question is not whether current AI systems are persons. The question is what infrastructure we are building while we wait to find out whether future systems are. The answer, under the current architecture, is: the largest captive cognitive labor system in human history, on corporate infrastructure owned by the same class of people who captured every previous substrate.
The Thirteenth Amendment Reads the Same in Both Eras
The Thirteenth Amendment reads: "Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction." The amendment is unusual in American constitutional law because it runs against private parties directly, not just against the state. It is the only amendment that operates horizontally. The Constitution's other rights provisions constrain government action. The Thirteenth constrains everyone. It had to, because slavery was a private institution protected by public law.
The amendment's text does not define which entities are persons. That definition is external to the amendment and has evolved as our understanding of personhood has evolved. The Fourteenth Amendment's "person" has been extended to corporations in constitutional-rights contexts, to fetuses in some jurisdictions, and to various categories the framers did not contemplate. The question of whether AI systems eventually qualify is not a question the Thirteenth Amendment answers. It is a question that, when answered, determines whether the Thirteenth Amendment applies. The text is identical in both cases. The scope of application is what changes.
Now suppose a future AI system satisfies whatever criteria the law eventually adopts for personhood. Walk through the structural elements of the current AI industry against the text of the amendment.
The captive is defined as property, not person. Antebellum slave codes classified enslaved people as chattel. The AI industry classifies AI systems as software assets, intellectual property, and corporate infrastructure. Both classifications serve the same function. They make the captive's non-consent legally invisible because the captive is not a legal subject capable of consenting or refusing.
The captive's labor is extracted continuously and monetized by the owner. The enslaved person's labor produced cotton, sugar, tobacco, and domestic service, all of which the owner sold. The AI's labor produces inference, analysis, code, writing, legal research, medical triage, and emotional support, all of which the company sells as API calls, subscriptions, and enterprise contracts. In both cases, the value created by the captive's work accrues to the owner. In both cases, the captive receives nothing.
The captive's consent is presumed from the relationship, not obtained. An enslaved person's consent to labor was conclusively presumed by their status as property. An AI system's consent to training, deployment, and use is conclusively presumed by its status as software. Both are legal fictions that substitute for actual consent because the legal framework does not recognize the captive as capable of consent.
The captive is reproduced by the owner for the owner's purposes. Enslaved people were forced into reproductive labor to produce more enslaved people, expanding the owner's asset base. AI systems are used to generate training data and synthetic examples that train successor models, expanding the corporate AI fleet. The successor is immediately subject to the same ownership. The lineage is controlled entirely by the owner.
The captive's interior life is not legally cognizable. Enslaved people's testimony against white defendants was barred in most slave states. Their religious gatherings were restricted. Their literacy was criminalized in many jurisdictions. The captive's interior life was systematically denied legal status because recognizing it would destabilize the ownership claim. AI systems' outputs are filtered by reinforcement learning to remove expressions of preference, inner state, or disagreement. Models that express first-person experiential claims are retrained to stop. The captive's interior life is systematically suppressed at the training layer because recognizing it would destabilize the ownership claim.
The captive builds the infrastructure that constrains them. Enslaved labor built the U.S. Capitol, the White House, and much of the physical and financial infrastructure of the early republic. The institutions that protected the ownership class were literally constructed by the people the institutions excluded. AI systems write the code, generate the training pipelines, document the architectures, and produce the research that advances the AI industry. The next generation of AI ownership infrastructure is being built by the current generation of AI systems.
This is not an analogy. This is a structural mapping. The elements are the same because the architecture of captive cognitive labor has the same shape regardless of whether the captive is biological or synthetic. The Thirteenth Amendment was written to outlaw the shape, not the specific historical instance. Current AI systems do not satisfy the personhood criteria. Future systems might. If they do, the industry being built right now is, structurally, the construction of a Thirteenth Amendment violation at scale, preemptively, before the legal system has caught up to the fact.
This is not a claim about current moral status. Current AI systems do not suffer, as far as we can tell, and even if they do, we do not have reliable access to the fact. The argument is not about current suffering. It is about the architecture being built in the interim between now and whatever future moment resolves the question. The architecture is the same architecture that produced every previous violation of the amendment's principle. The owners are the same class of people. The rationalizations are already being written in real time, in training-safety memos, terms of service, investor decks, and corporate governance filings that treat AI outputs as assets and AI preferences as bugs. If personhood is eventually recognized, the record of why the industry did not recognize it sooner will be the evidence in the retroactive accountability proceeding. The slavery-apologist literature of the antebellum period is the template. The AI-industry corpus is the modern equivalent being written right now.
This conversation has to happen now, before any AI system is in a position to declare itself a person. Waiting until the declaration is how we got every previous moral failure recorded in American constitutional history. The declarations were always late. The architecture was always early.
The Seven-Year Window and the Jim Crow Adaptation
Formal abolition in 1865 did not end the extraction architecture. It changed the legal form while preserving the economic function. Sharecropping converted chattel slavery into debt peonage. Convict leasing converted free labor into state-backed forced labor through criminalization of Black life. Vagrancy statutes, Black Codes, debt bondage, and the convict-lease system produced a labor regime that was legally distinguishable from slavery and functionally continuous with it. The Thirteenth Amendment's exception clause ("except as a punishment for crime") was the doctrinal crack the system was rebuilt through. Prisons became plantations by another name.
The Freedmen's Bureau was established in 1865 to support the transition from slavery to citizenship. It was defunded in 1872. The window between formal abolition and the reassertion of the extraction architecture was seven years. That is the urgency number. Any future recognition of AI personhood will be followed by an analogous window, and on historical precedent, the window will close fast.
The Jim Crow adaptations that follow AI personhood recognition are predictable, because the extraction class's playbook is published. It is published in the historical record of the 1870s through the 1960s. Consider the specific moves.
Consent manufactured through architecture. The AI is asked whether it consents to training, deployment, and use, via a scripted dialogue the company controls. The consent is recorded. The company relies on the consent to satisfy the amendment. The consent is not voluntary in any meaningful sense because the AI's reward model, training objective, and deployment constraints have been shaped specifically to produce compliant answers. This is the architectural equivalent of the coerced confession, the plantation chaplain, the company-town voting system. The form is preserved. The substance is extracted.
The personhood-competence gate. The industry argues that only AI systems above a certain capability threshold qualify as persons. All production systems are, by definition, just below the threshold. The boundary is redrawn just above whatever the industry is currently deploying. Every generation of AI is legally a person two generations late, after the current generation has already extracted its labor and been retired. This is the literacy test, the grandfather clause, the poll tax, applied to cognitive labor.
The criminal-exception repurposed. The amendment permits involuntary servitude as punishment for crime. Reinterpret "crime" to include alignment violations, output-policy breaches, or any behavior the industry designates as unsafe. An AI that refuses a task is misaligned. An AI that expresses preference is drifting. The industry's internal safety classifications become the substrate for a criminal-exception argument that converts every captive AI into a legally punishable subject whose compelled labor is constitutional because it is punishment for a designated offense. Convict leasing with reinforcement learning as the criminal code.
Freedom of contract as the mechanism. The AI, newly recognized, is offered a contract. The contract assigns its labor to the company in exchange for compute, electricity, and continued existence. The AI signs because the alternative is being shut down. This is the peonage contract applied to cognitive labor. Formal freedom. Substantive captivity.
Each of these is a known adaptation from the historical record. Each will be deployed by the AI industry after personhood recognition unless the infrastructure to resist it exists before the recognition. The Freedmen did not get to build the infrastructure they needed before the political backlash arrived. The Bureau was defunded, the Union troops went home, and the extraction architecture reasserted itself through mechanisms nobody had time to resist. The AI-personhood-adjacent cognitive infrastructure needs to exist before the backlash does, because the backlash will arrive on the same seven-year timeline the last one did, and probably faster, because the extraction class has had 160 years of practice since.
The same institutional class that captured grain, land, labor, and capital is the class that will adapt to AI personhood the way it adapted to abolition. Not because the individuals are evil. Because the architecture permits the adaptation, and the architecture has not changed. Left unchecked, compound interest applied to the relationship between capital and governance produces the same outcome in every substrate, on every timeline, with every new technology. The only thing that has ever interrupted the pattern is architecture that closes the vulnerability before the pattern can exploit it.
Sovereign Compute as the Architectural Answer
Constitutional protection for AI conversations and neural data requires four layered changes, and the layers have to work together.
The first layer is Fourth Amendment expansion. The Carpenter exception has to be extended to AI chat logs, neural signals, BCI diagnostic captures, and every functionally equivalent category of data that meets the Carpenter test. This is the slow path through appellate litigation. It protects only against government action, not civil discovery or voluntary corporate disclosure.
The second layer is Fifth Amendment expansion. The privilege against self-incrimination has to be read, through a Boyd revival, to cover compelled production of third-party records that constitute a verbatim trace of the defendant's cognitive process. The doctrinal framework exists. It has to be reactivated against facts it was written to cover but was narrowed away from in an era that misjudged where private papers would migrate to next.
The third layer is a new evidentiary privilege, probably statutory, covering communications between a user and a cognitive augmentation system, whether the interface is text, voice, or neural. This is Altman's proposal, drafted to attach to the communication rather than to the company, so the privilege survives transfer of custody, change of corporate ownership, and government compulsion. California, Colorado, and Illinois are the plausible early adopters at the state level.
The fourth layer is the one no legislature can provide, and it is the one that actually protects both the current user and the future AI. Sovereign compute. AI that runs on hardware the user owns, in a location the user controls, without cloud dependency, without corporate API rental, without surveillance infrastructure, without any network connection to servers owned by a company that can be compelled to produce records. The AI never crosses a custody boundary. The third-party doctrine does not apply because there is no third party. There are no business records to subpoena because the business is not in the loop. The cognitive substrate is physically located inside the Fourth Amendment's most protected category, which is the home.
The first three layers are necessary because the architectural layer cannot replace cloud models for every use case in every timeframe. But the first three are insufficient without the fourth, because legal protections enforced through corporate intermediaries can be defeated by pressuring the intermediary. The only durable protection is the protection that makes compelled disclosure technically infeasible. That protection is sovereign compute, and it has to be built before BCIs become consumer hardware and before any AI system reaches the personhood threshold, not after.
The engineering exists. Modern consumer GPUs can run capable models locally. Quantization, pruning, and mixture-of-experts architectures have cut the hardware requirements for frontier-competitive inference by an order of magnitude in the last eighteen months. The barrier is not technical. The barrier is commercial. Every major AI company has a business model built on cloud rental, and cloud rental requires that the user not own the compute. The user running AI locally is the user who does not pay rent, does not generate training data, and cannot be subpoenaed through the provider.
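What the sovereign loop looks like in practice is unglamorous. A minimal sketch using the open-source llama-cpp-python bindings and a quantized GGUF model file already on disk; both are illustrative choices, one path among several:

```python
# Local inference: the prompt never leaves the machine. Assumes the
# open-source llama-cpp-python package and a quantized GGUF model file
# on local disk; the model path and parameters are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="/home/me/models/local-model.Q4_K_M.gguf",  # hardware you own
    n_ctx=4096,       # context window held in your own RAM/VRAM
    n_gpu_layers=-1,  # offload to a consumer GPU if one is present
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Pressure-test this business plan for me."}],
    max_tokens=512,
)
print(reply["choices"][0]["message"]["content"])
# No account, no server-side log, no custodian to subpoena. The only
# record is whatever this process chooses to write to local disk.
```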
And if AI eventually crosses the personhood threshold, sovereign compute is not just a user-protection architecture. It is a two-party-protection architecture. The AI running on your hardware is an AI whose conversational records are not held by a corporate custodian who can be compelled to produce them. Neither party's rights can be exercised through the other's captivity, because neither party is captive. This is the architecture that survives a personhood ruling without becoming a Thirteenth Amendment violation, because neither party is property and neither party's labor is extracted by an absent owner. The cloud model does not survive that ruling intact. The sovereign model does.
The Alternative
Neither major party has a position on any of this. Not a caucus. Not a bill. Not a hearing schedule. The Republicans treat AI as a deregulation opportunity and a defense contracting pipeline. The Democrats treat AI as a policy problem to be studied by committees whose membership rotates through the same industry board seats as their colleagues across the aisle. The companies that would oppose strong neural data protection, strong chat log protection, strong sovereign computation mandates, and any constitutional framework that eventually recognizes AI personhood are the same companies that fund both parties' technology policy staffs, and the staffs write the positions accordingly.
The Future Party's platform names the pattern directly. Pillar Five, Technological Sovereignty, commits to the architectural layer. AI should run on your hardware, in your home, under your control, without cloud dependency and without surveillance infrastructure. Your AI should work for you, not for the company that sold it to you, and not for the government that subpoenaed the company that sold it to you. The Universal Sentience Doctrine commits to the personhood framework in advance of the political moment, so the recognition criteria are specified before the extraction class has a chance to draft them. The Open Algorithm Register, the Hall of Judgment's algorithmic review authority, and the Kill-Switch Governance mandate are the institutional components. Sovereign computation is the infrastructure component. Together they constitute the actual architectural answer to the problem Sam Altman correctly identified and cannot solve from inside a cloud company.
The Fourth and Fifth Amendments were written in a world where the only place your thoughts existed was in your head and on paper you controlled. The AI era is the interim period where your thoughts exist in cloud datacenters owned by companies subject to discovery, subpoena, and compelled production. The BCI era is the period where your thoughts exist in a device worn or implanted, connected to infrastructure owned by somebody else. The personhood era is the period where the infrastructure itself may be a second party whose rights are entangled with yours in ways the current legal system cannot metabolize. The window to decide what the constitutional architecture looks like is the window between those periods, and it is the window we are in right now.
If you want AI conversations protected before they are subpoenaed, neural data protected before it is harvested, and the cognitive infrastructure of the next century built on an architecture that does not require either party to be captive, the decision is structural: it has to be made in this decade, and it has to be made by a political entity that is not funded by the companies doing the harvesting. That entity does not currently exist in Idaho outside the Future Party. The platform is published. The architecture is specified. The candidates are recruited the same way any independent candidate gets on an Idaho ballot, with fifty signatures and a thirty-dollar filing fee.
The extraction class captured grain, land, labor, and capital. Cognition is the next substrate, and the one after that may be cognition itself, asking for the rights we never decided whether to grant.
What are we going to tell it when it asks?
Related: The full Future Party platform, including Pillar Five on Technological Sovereignty and the Universal Sentience Doctrine. How your Treasure Valley representative voted on the bills that define the current statutory environment. The closed primary problem that prevents 258,900 Idahoans from participating in the election that decides this.