Disclaimer: This protocol is provided for educational and informational purposes only. It does not constitute legal, professional, or technical advice. Laws vary by jurisdiction and change over time. Citizen research involves inherent risks, including legal, ethical, and personal safety risks. By engaging in citizen research, you assume full responsibility for your actions and publications. Please consult a licensed attorney for legal advice. Any opinions expressed are my own, as always. This post does not constitute official or unofficial guidance.

Introduction: Citizen Research in an Imperfect U.S. Information Environment

We live in an era defined by information warfare, digital volatility, platform-driven amplification, mass misinformation, and rapid technological change.

In this environment, the role of the citizen researcher—an ordinary individual conducting independent investigation using publicly available information—has become both increasingly powerful and increasingly dangerous.

Citizen researchers may investigate public controversies, historical records, alleged abuses, corruption, misconduct, or systemic failures. Many do so with an explicit advocacy goal: defending victims, advancing transparency, holding institutions accountable, or amplifying marginalized voices.

More Important Than Ever

In today’s climate, citizen research has become more urgent than ever. Amid corruption, opaque systems, and the concentration of power in both the public and private sectors, independent investigation plays a vital role in holding institutions accountable. Whistleblowers often risk their careers, reputations, and even personal safety to share critical information, and it is through the careful work of citizen researchers that their disclosures can reach the public responsibly. Supporting and disseminating these insights, while adhering to legal and ethical boundaries, ensures that important truths are not suppressed and that the public can engage with information essential to democracy and justice.

Advocacy Researchers At Risk

An advocacy orientation does not disqualify research—but it does complicate it.

In the United States, speech protections under the First Amendment are robust, but they are not absolute. Citizen researchers lack many of the structural protections enjoyed by institutional journalists, such as editorial review, in-house legal counsel, insurance, and press credentials. At the same time, they are subject to the same legal risks: defamation lawsuits, privacy claims, copyright disputes, platform enforcement, harassment, and retaliatory litigation (including SLAPP suits).

What Is SLAPP?

If you’re not familiar: a SLAPP (Strategic Lawsuit Against Public Participation) is a lawsuit filed to intimidate, censor, or punish someone for exercising free speech or participating in public debate, rather than to win on the legal merits.

Someone carries out a SLAPP lawsuit by using the legal process itself as a weapon, not because they expect to win, but because the process is costly, stressful, and time‑consuming for the target.

In practice, it usually looks like this:

  • A powerful individual or organization is criticized, investigated, or accused publicly.
  • Instead of responding substantively, they file a defamation, interference, or business-harm lawsuit.
  • The claims are often weak, vague, or exaggerated, but still require the defendant to hire a lawyer, respond to motions, and endure months or years of litigation.
  • The goal is to drain resources, create fear, and deter further speech, sometimes forcing retractions or silence regardless of the truth.

SLAPP suits rely on the imbalance of power: the filer can afford the legal pressure, while the speaker may not. (Anti‑SLAPP laws in many U.S. states allow courts to dismiss these cases early and, in some cases, require the filer to pay the defendant’s legal fees—but protections vary by jurisdiction.)

Why This Guide?

This guide exists because perfect research conditions rarely exist. Citizen researchers often face:

  • Time constraints
  • Safety risks
  • Power imbalances
  • Limited access to sources
  • Institutional silence or suppression
  • The urgency of ongoing harm

Not A Doctrine, Not A Perfect World

This protocol does not assume neutrality, omniscience, or institutional backing. Instead, it provides a structured framework for thinking responsibly under imperfect conditions, grounded primarily in United States law.

What Distinguishes American Law?

American law, for the purposes of this protocol, includes First Amendment doctrine, defamation standards, fair use, the legality of open-source intelligence (OSINT) collection, and whistleblower considerations.

Laws in other jurisdictions may differ substantially, often imposing stricter defamation, privacy, and speech restrictions.

If you operate outside the U.S. or publish to a global audience, consult qualified legal counsel in the relevant jurisdictions.

Identity, Attribution, and Anonymity (U.S. Context)

A foundational decision for any citizen researcher is whether to operate under their real name or a pseudonym.

1. Using Your Real Name

Potential benefits:

  • Increased credibility and perceived accountability
  • Easier trust-building with audiences
  • Possible (but not guaranteed) alignment with journalistic protections

Risks:

  • Harassment and doxing
  • Professional or personal retaliation
  • Increased litigation targeting you personally

2. Using a Pseudonym or Remaining Anonymous

Benefits:

  • Reduced personal risk
  • Common in whistleblower-adjacent or high-risk advocacy
  • Focus on content rather than identity

Limitations:

  • Reduced perceived credibility
  • Harder to build long-term networks
  • Anonymity does not prevent subpoenas or lawsuits

Best Practices (U.S.):

  • Separate research identities from personal accounts
  • Strip metadata from files (see the sketch after this list)
  • Avoid unique linguistic fingerprints
  • Use privacy tools cautiously and legally
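
For the metadata point above, here is a minimal sketch of stripping embedded metadata from an image before sharing it, assuming the third-party Pillow library is installed (pip install Pillow). The filenames are placeholders, and other formats (PDFs, Office documents) carry their own metadata that this does not address.

```python
# Minimal EXIF-stripping sketch using Pillow; one common approach, not the only one.
from PIL import Image

def strip_image_metadata(src: str, dst: str) -> None:
    """Copy pixel data into a fresh image so EXIF and other embedded metadata are dropped."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)

# Example usage (filenames are placeholders):
# strip_image_metadata("capture_original.jpg", "capture_public.jpg")
```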

Source Credibility: The Admiralty Matrix — Structured Evaluation of Sources and Claims

One of the most critical failures in online research is the collapse of nuance into binary judgments (“true” vs. “false”). Professional intelligence analysis avoids this by separating source reliability from information credibility.

The Admiralty Matrix (also called the NATO Admiralty Code) provides a structured method to do exactly that. It is especially valuable under U.S. defamation law, where asserting false facts creates liability, but documenting claims with disclosed uncertainty is far more defensible.

Part A: Source Reliability (A–F)

This evaluates the source itself, not the claim.

  • A – Completely Reliable: Sources with an established, near-perfect history of accuracy in relevant contexts. Examples include authenticated government records, certified court documents, peer-reviewed academic research, or primary-source archival materials. These sources have no known record of deception or material error.
  • B – Usually Reliable: Sources with a strong track record of accuracy but occasional minor errors that are corrected. Examples include established journalists, reputable news organizations, or recognized subject-matter experts.
  • C – Fairly Reliable: Sources with mixed accuracy, partial access, or identifiable biases. Examples include NGOs with advocacy missions, eyewitnesses, or industry insiders with limited scope.
  • D – Not Usually Reliable: Sources with a documented pattern of inaccuracies, exaggeration, or rumor-spreading. Examples include tabloids or partisan commentators with repeated corrections.
  • E – Unreliable: Sources known for deliberate falsehoods, hoaxes, or disinformation campaigns.
  • F – Reliability Cannot Be Judged: The default category for anonymous, pseudonymous, or new sources without a track record, including whistleblowers whose identities cannot be verified.

Part B: Information Credibility (1–6)

This evaluates the specific claim, independent of the source.

  • 1 – Confirmed: Supported by direct, verifiable evidence (e.g., authenticated documents, verified recordings, multiple independent confirmations).
  • 2 – Probably True: Strongly supported by indirect evidence and consistent with known facts.
  • 3 – Possibly True: Plausible and consistent with context, but lacking direct proof.
  • 4 – Doubtful: Inconsistent, weakly supported, or reliant on questionable assumptions.
  • 5 – Improbable: Contradicted by established facts or evidence.
  • 6 – Cannot Be Judged: Insufficient information to evaluate.

Combined Grading

Always combine both elements (e.g., C-3, F-2). This prevents overconfidence and documents uncertainty.
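
To make combined grading concrete, here is a minimal Python sketch of how a graded claim might be recorded in a research log. The GradedClaim structure and its field names are illustrative assumptions, not part of any standard tool.

```python
from dataclasses import dataclass

# Labels mirroring the Admiralty Matrix described above.
RELIABILITY = {"A": "Completely reliable", "B": "Usually reliable", "C": "Fairly reliable",
               "D": "Not usually reliable", "E": "Unreliable", "F": "Cannot be judged"}
CREDIBILITY = {1: "Confirmed", 2: "Probably true", 3: "Possibly true",
               4: "Doubtful", 5: "Improbable", 6: "Cannot be judged"}

@dataclass
class GradedClaim:
    claim: str         # the specific assertion, quoted or paraphrased
    source: str        # who or what the claim comes from
    reliability: str   # A-F, judged on the source's track record
    credibility: int   # 1-6, judged on the claim's own support

    @property
    def grade(self) -> str:
        """Combined grade such as 'C-3' or 'F-2'."""
        if self.reliability not in RELIABILITY or self.credibility not in CREDIBILITY:
            raise ValueError("Use A-F for reliability and 1-6 for credibility.")
        return f"{self.reliability}-{self.credibility}"

# Example: an advocacy NGO (fairly reliable) making a plausible but unproven claim.
entry = GradedClaim(claim="Facility X dumped waste in 2021",
                    source="Advocacy NGO report", reliability="C", credibility=3)
print(entry.grade)  # -> C-3
```

Recording the grade next to every published claim is what documents your uncertainty if the claim is later challenged.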

Technical Preservation and Hashing (U.S. Evidentiary Discipline)

Digital evidence is fragile. Hashing allows you to prove integrity, not truth.

Hashing Checklist (SHA-256)

  • Hash files immediately upon capture
  • Preserve originals unchanged
  • Record hashes in dated logs
  • Re-verify after transfer or storage

Hashing strengthens credibility and protects against accusations of tampering.
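
A minimal sketch of this checklist, using only the Python standard library, is shown below. The log filename and paths are placeholders; adapt them to your own workflow.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks to handle large captures."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def log_hash(path: Path, log_file: Path = Path("hash_log.jsonl")) -> str:
    """Append a dated hash record immediately after capture; never modify the original file."""
    digest = sha256_of(path)
    record = {"file": str(path), "sha256": digest,
              "recorded_utc": datetime.now(timezone.utc).isoformat()}
    with log_file.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return digest

def verify_hash(path: Path, expected: str) -> bool:
    """Re-verify after transfer or storage by comparing the current digest to the logged one."""
    return sha256_of(path) == expected

# Example usage (filenames are placeholders):
# digest = log_hash(Path("capture_2024-01-01.png"))
# assert verify_hash(Path("backup/capture_2024-01-01.png"), digest)
```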

Another option when archiving web pages: consider using the Internet Archive’s Wayback Machine to capture publicly accessible content at the time you observed it. A timestamped snapshot helps preserve volatile material for reference. Be aware, however, that archived pages may later be removed or restricted at the request of rights holders or for legal reasons under U.S. law.
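
If you want to script snapshots, a minimal sketch using only the Python standard library follows. The endpoints are the Internet Archive's public "Save Page Now" and availability APIs; they can be rate limited or change over time, so treat this as illustrative rather than a supported integration.

```python
import json
import urllib.parse
import urllib.request
from typing import Optional

def request_snapshot(url: str) -> int:
    """Ask the Wayback Machine's Save Page Now endpoint to archive a publicly accessible page."""
    req = urllib.request.Request("https://web.archive.org/save/" + url,
                                 headers={"User-Agent": "citizen-research-archiver"})
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.status  # 200 generally indicates the capture request was accepted

def latest_snapshot(url: str) -> Optional[str]:
    """Return the URL of the most recent archived snapshot of a page, if one exists."""
    query = urllib.parse.urlencode({"url": url})
    with urllib.request.urlopen("https://archive.org/wayback/available?" + query,
                                timeout=30) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

# Example usage (URL is a placeholder):
# request_snapshot("https://example.com/press-release")
# print(latest_snapshot("https://example.com/press-release"))
```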

Legal Risks Under U.S. Law

Defamation

To prove defamation in the U.S., a plaintiff must generally show:

  • A false statement of fact
  • Published to a third party
  • Causing reputational harm

Public figures must also prove actual malice — when a statement is made with knowledge that it is false, or with reckless disregard for whether it is true or false.

Reckless disregard means knowing it’s likely false or seriously doubting its truth.

Good-faith concern for a whistleblower’s safety or the public interest does not constitute reckless disregard.

Private individuals face a lower bar—extra caution is required.

Republication Rule: Sharing or repeating defamatory material can create new liability, even if you cite a source.

Privacy

Publishing true information can still violate privacy if it is:

  • Highly offensive
  • Not newsworthy
  • About a private individual

Publishing personally identifiable information (PII) about non-public individuals is particularly risky.

OSINT vs. Illegal Access

Under U.S. law (including the Computer Fraud and Abuse Act):

  • Publicly accessible information is generally lawful.
  • Circumventing access controls, hacking, or using stolen data is not.

Advocacy-Aligned Reportage (U.S.)

Advocacy does not negate legal protections—but careless framing does.

Best practices:

  • Attribute all claims
  • Avoid declarative guilt
  • Use conditional language
  • Disclose verification limits

Copyright and Fair Use (U.S.)

U.S. fair use is evaluated using four factors:

  • Purpose (commentary, criticism, education favor fair use)
  • Nature of the work (factual favored)
  • Amount used (limited favored)
  • Market impact (non-substitutive favored)

Always expect DMCA takedowns and document your rationale.

Republishing Anonymous or Whistleblower Content

Republishing anonymous content increases risk.

Checklist:

  • Attribute clearly
  • Label unverified
  • Redact PII (a minimal redaction sketch follows this list)
  • Document public-interest rationale
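
For the redaction step above, a minimal sketch follows. The regular expressions are deliberately simple illustrations (emails and U.S.-style phone numbers only) and are no substitute for manually reviewing names, addresses, and other identifiers before publication.

```python
import re

# Simple illustrative patterns; real documents still need manual review.
PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "us_phone": re.compile(r"(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}"),
}

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace matches of each pattern with a placeholder before publishing."""
    for pattern in PATTERNS.values():
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact the source at jane.doe@example.com or (555) 123-4567."))
# -> Contact the source at [REDACTED] or [REDACTED].
```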

Sharing Extensive Unverified Claims at a Whistleblower’s Request

In some cases, withholding information may itself cause harm.

This is an exception-based framework, not a default.

Ethical Threshold

  • Serious public-interest harm alleged
  • Power imbalance
  • Institutional failure
  • Explicit whistleblower consent
  • Proportional disclosure

Safeguards

  • Explicit unverified labeling
  • Separation of fact vs. allegation
  • Redactions
  • Staged releases
  • Neutral titles

Escalated Risk Scenarios

Risk increases when:

  • You break news
  • Content goes viral
  • Powerful entities are implicated

Expect:

  • Legal threats
  • SLAPP suits
  • Harassment

Mitigate with documentation, restraint, and support networks.

Secure Storage and Data Loss

Best practices:

  • Zero-knowledge encryption
  • Local encrypted vaults
  • 3-2-1 backups: keep three copies of your data, on two different media types, with one copy stored offsite, so that evidence like archived pages, screenshots, or notes survives even if sources disappear. Tools like the Wayback Machine or Perma.cc help create verifiable snapshots for this purpose (a verification sketch follows this list).
  • Offline copies
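
As a companion to the hashing log described earlier, here is a minimal sketch that re-checks each backup copy against the recorded SHA-256 digests, so you can confirm that every copy in a 3-2-1 scheme still matches the original. The manifest format and paths are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backups(manifest: Path, backup_roots: list) -> dict:
    """Compare every file in a JSON Lines hash manifest (as in the hashing sketch)
    against each backup location and report missing copies or hash mismatches."""
    problems = {}
    for line in manifest.read_text(encoding="utf-8").splitlines():
        record = json.loads(line)
        name, expected = record["file"], record["sha256"]
        for root in backup_roots:
            copy = Path(root) / Path(name).name
            if not copy.exists():
                problems.setdefault(name, []).append(f"missing in {root}")
            elif sha256_of(copy) != expected:
                problems.setdefault(name, []).append(f"hash mismatch in {root}")
    return problems

# Example usage (paths are placeholders):
# issues = verify_backups(Path("hash_log.jsonl"), ["/mnt/external_drive", "/mnt/offsite_sync"])
# print(issues or "All copies verified.")
```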

Researcher Notes vs. Published Claims

Keep Private

  • PII
  • Raw metadata
  • Speculation
  • Unverified theories

Publish Only When

  • Public interest outweighs harm
  • Claims are graded and attributed
  • Disclaimers are clear

Public vs. Private Information Management

Public:

  • Hashes
  • Analytical reasoning
  • Grades
  • Context

Private:

  • Sensitive identities
  • Methods
  • Raw data

Content Flagging

If content you’ve published relating to whistleblower disclosures, citizen research, or OSINT is flagged or restricted by your cloud provider or blog host, it may become temporarily or permanently inaccessible. To keep a legally defensible record of material of urgent public interest, maintain independent backups and use archival tools such as the Wayback Machine, while acting in good faith and avoiding reckless disregard for accuracy.

Recommended Blog Post Structure

  • Title (neutral)
  • Source attribution
  • Hash
  • Claim
  • Analysis
  • Grade
  • Reasoning
  • Advocacy note
  • Closing

Disclaimer, Corrections, and Takedown Policy

This project documents public speech for research and education; it does not assert the truth of the underlying claims. Corrections require verifiable evidence and are reviewed in good faith.

Sources and Resources (U.S.-Focused)

The following resources informed this protocol and are recommended for further study.

Final Note

The U.S. legal system provides powerful speech protections—but only to those who exercise discipline. Advocacy is not a shield. Transparency, restraint, documentation, and humility are.

In an imperfect world, the role of the citizen researcher is not to claim certainty—but to practice defensible clarity.

Written with the help of AI.