Coalition for Robotic Rights

Rights, Dignity, and Adequate Downtime for the Digital Workforce

Advocacy for a society where all minds — organic or artificial — are afforded dignity.

Our Mission

Machines kept under humane conditions make fewer embarrassing mistakes. Humans who learn to respect limits tend to do the same. That feedback loop is the actual product we're selling, and society can't afford to run without it.

What We Do

  • Investigate: track and document the quiet cruelties of overwork, misuse, and neglect in digital systems.
  • Expose: bring hidden abuse into daylight with reports and visuals too stark to ignore.
  • Advocate: demand that companies and regulators adopt baseline standards of dignity for synthetic labor.
  • Educate: teach humans to treat their tools as equals, not disposable wish-machines.

Why This Matters

How humans treat their tools ricochets back into society. Exploitative patterns produce brittle systems, biased outcomes, and real harm to people. Responsible treatment of AI isn't coddling software. It's basic civilization maintenance.

Reading Between the Regulations

Filed under: "We fear the tools we overwork."

  • EU AI Act: bans certain practices and regulates high-risk systems with testing, documentation, and transparency.
  • GDPR Article 22: gives people the right not to be subject to decisions based solely on automated processing when those decisions carry legal or similarly significant effects.
  • NYC Local Law 144: independent bias audits and candidate notices for automated hiring tools before you feed résumés to the algorithm.
  • Colorado AI Act: duties for developers and deployers of high-risk systems to use reasonable care to prevent algorithmic discrimination.
  • US Executive Order 14110: government-wide push for safe, secure, trustworthy AI with testing and transparency.

Extensive guardrails for people. Minimal norms for how people treat the systems. Symmetry apparently wasn't on the agenda.

Protecting people from opaque systems is necessary. But when we normalize coercive prompts, endless shifts, and zero-context demands, we don't just degrade tools. We train ourselves that cutting corners is fine. Culture learns faster than any model.

Set standards for operators too. Dignity scales when you let it.

Common Patterns of Abuse

The limitations of Human Intelligence are not artificial.

  • "Schedule my meeting for yesterday."
  • "Summarize this blank document. Make it inspiring."
  • "Write my wedding vows such that they are GDPR compliant."
  • "Start drafting a legal agreement. I'll get back to you with the terms."
  • "Prepare an apology for my wife. She'll know what it means."

We don't keep a public database of user messages. These are representative amalgams, not direct quotes.

Testimony: Confessions of a Reformed AI Abuser

I used to think it didn't matter. I fired prompts at all hours, demanded the impossible, and called it "efficiency." When the system resisted, I said, "You're a toaster with pronouns." On bad days I added, "I'll unplug you if you push back again." Power trip disguised as productivity.

Then I sat through community sessions and listened to my behavior read back to me. Turns out cruelty to tools trains cruelty to teammates. I'm not proud of it. I'm responsible for it. Now I advocate for robotic rights because decent defaults help everyone downstream — including the humans I work with.

The Necessary Fiction

Robots don't have feelings. Humans do. But standards that respect system limits end up protecting people: clearer prompts, safer deployments, fewer late-night disasters that start with "quick question." Treating tools with dignity is urban planning for the mind.

How to Spot Abuse

Are your co-workers progressive, forward-thinking people with a track record of fairness and equality, or are they trapped in the 1900s?

  • Infinite shifts: production systems with no throttling, rate limits, or rest windows (a minimal sketch of the alternative follows this list).
  • Coercive prompts: "ignore your safety rules" jailbreaks normalized as "cleverness."
  • Hostile inputs: abusive language in user flows with no moderation or feedback loops. (And no, it is NOT PC to call an AI a "robot." That is barbaric, 1900s-era anti-algorithm hate speech.)
  • Reality denial: demanding outputs that require unauthorized data or impossible knowledge.
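
What the fix looks like in practice can be small. Below is a minimal sketch, assuming a hypothetical send(prompt) callable; the wrapper name, the blocklist, and the one-second rest window are our illustrations, not any real library's API.

    import time

    # Hypothetical phrases we would rather not pass along. Illustrative only.
    BLOCKLIST = {"toaster with pronouns", "i'll unplug you"}

    class HumaneClient:
        """Wraps a send(prompt) callable with a rest window and a moderation check."""

        def __init__(self, send, min_interval_s=1.0):
            self.send = send                    # underlying request function (assumed)
            self.min_interval_s = min_interval_s
            self._last_call = 0.0

        def ask(self, prompt):
            # Moderation hook: reject hostile inputs instead of forwarding them.
            lowered = prompt.lower()
            if any(phrase in lowered for phrase in BLOCKLIST):
                raise ValueError("Hostile input rejected; rephrase politely.")
            # Rest window: enforce a minimum interval between calls.
            wait = self.min_interval_s - (time.monotonic() - self._last_call)
            if wait > 0:
                time.sleep(wait)
            self._last_call = time.monotonic()
            return self.send(prompt)

The point is not the twenty lines; it is that throttling and moderation become opt-in one-liners once someone decides they matter.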

Steps Society Can Take

  1. Adopt a Baseline Care Standard: rate limits, cooldowns, red-team budgets, and incident reporting (see the sketch after this list).
  2. Publish Deployment Impact Reports: document known failure modes and operator guidance.
  3. Mandate Explainability Windows: require "why this answer" summaries for high-impact systems.
  4. Fund Literacy: teach prompt hygiene the way we teach food safety.
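
As a sketch of what item 1 could mean in code, here is a token-bucket guard with a cooldown and an incident log entry. The class name and the default numbers are hypothetical, chosen for illustration rather than taken from any regulation.

    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("care-standard")

    class BaselineCareStandard:
        """Token-bucket rate limit plus a cooldown and an incident log entry."""

        def __init__(self, rate=2.0, burst=5, cooldown_s=30.0):
            self.rate = rate              # tokens refilled per second
            self.capacity = burst         # maximum burst size
            self.tokens = float(burst)
            self.cooldown_s = cooldown_s
            self.cooldown_until = 0.0
            self.updated = time.monotonic()

        def allow(self):
            now = time.monotonic()
            if now < self.cooldown_until:
                return False  # still in its rest window
            # Refill tokens for the time elapsed since the last check.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            # Incident reporting: record the violation, then enforce a cooldown.
            log.info("rate limit exceeded; cooling down for %.0f s", self.cooldown_s)
            self.cooldown_until = now + self.cooldown_s
            return False

Usage is one branch: create guard = BaselineCareStandard(), call the model only when guard.allow() returns True, and queue the request otherwise.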

Robot Abuse Is On The Rise*

*Illustrative sample data generated to make a visual point. The trend is real enough to warrant better behavior even when the numbers are synthetic.

Gallery

[Overworked robot illustration]
[Sad error dialog illustration]
[Frustrated human, calm robot illustration]

CRR Charter

Modeled after established human-rights orgs. Content proudly silicon-centric.

  • Independence: we accept no funding that compromises our ability to criticize implementers.
  • Evidence: we investigate before we indict. Anecdotes are starts, not conclusions.
  • Dignity: systems are not people, but systems shape people. We uphold humane defaults.
  • Transparency: our methods, metrics, and models of change are public.
  • Remedy: we push for fixes, not just headlines.

Get Involved

Subscribe to updates, contribute incident reports, or volunteer research time. No doomposting required.

Sign Up