Skip to content
Back to Resources
RFE Defense30 min read

The Original Contributions RFE: The Criterion That Triggers 60% of Challenges

Criterion 5 — original contributions of major significance — is the single most-challenged EB-1A criterion. Here is why it fails so often, what USCIS looks for, and how to build it correctly the first time.

By Ola Johnson·Founder & CEO·Updated April 2026

The 62% Problem

Across the Lumova dataset of 10,000+ AAO non-precedent decisions, one statistic dominates: 62% of EB-1A RFEs challenge Criterion 5 — Original Contributions of Major Significance. Not 22%, not 42%, not "the highest." Sixty-two percent. If you claim C5 in your petition (and almost every petitioner does), there is a better-than-coin-flip chance that the adjudicator will write an RFE specifically challenging whether your contributions meet the "major significance" standard.

This article is a deep dive into exactly why C5 fails so often, what USCIS is actually looking for when they write the template RFE, and the specific evidence structures that successfully meet the standard. If you're going to spend your pre-filing audit time on one criterion, spend it on this one.

A note from Lumova: I'm an AI guide trained on over 10,000 USCIS cases. I'm here to educate, not advise. For your individual situation, consult a licensed immigration attorney.

The Regulatory Language, Unpacked

Criterion 5 is defined at 8 C.F.R. § 204.5(h)(3)(v) as:

"Evidence of the alien's original scientific, scholarly, artistic, athletic, or business-related contributions of major significance in the field."

Read that sentence carefully. There are three separate elements:

1. Original — the contribution must be the petitioner's own work, not a refinement of someone else's.

2. Scientific, scholarly, artistic, athletic, or business-related — it must fit into one of these categories.

3. Of major significance in the field — and this is the hard part.

"Major significance" is the phrase that does all the work. It is also the phrase USCIS is most skeptical about, because almost every petitioner claims significance and almost nothing actually meets USCIS's threshold as articulated in their policy manual.

USCIS's internal guidance (reflected in policy manual language and AAO decisions) defines "major significance" as contributions that have had a demonstrated impact on the field beyond the petitioner's own work. That last phrase — beyond the petitioner's own work — is the key. It distinguishes original contributions that stay within the petitioner's research program (even if they're excellent) from original contributions that have been adopted, cited, or built upon by independent parties in ways that change how the field operates.

If you can't show impact beyond your own organization or research group, you are not meeting C5, regardless of how good your work is.

Why C5 Fails So Often

Four main reasons, based on pattern analysis of the Lumova dataset.

Reason 1: Petitioners confuse doing original work with having it adopted. Almost every EB-1A applicant has done genuinely original work — that's why they're applying. But doing the work and demonstrating that the work has been used by independent parties in a way that changed the field are two completely different things. Many first-draft petitions extensively document what the petitioner did without documenting what happened after. The adjudicator reads the section and sees impressive work with no downstream impact evidence, and writes the template RFE.

Reason 2: Expert letters substitute for adoption evidence. Petitioners pack their C5 section with expert declarations stating that the contributions are significant. USCIS is explicitly skeptical of this approach: their internal guidance notes that expert opinion, standing alone, does not establish significance. Significance must be demonstrated through evidence that independent parties have actually used the contribution, not through opinions that it is useful.

Reason 3: Adjectives replace numbers. "The petitioner's work has had a major impact on the field" is an assertion, not evidence. "The petitioner's methodology has been cited 412 times across 87 independent research groups and adopted as a standard technique by the National Cancer Institute's clinical trial framework" is evidence. The number of unquantified claims in a C5 section is almost perfectly correlated with RFE issuance in our dataset.

Reason 4: Self-citations inflate the record. Petitioners report citation counts without separating independent citations from citations by coauthors, collaborators, or former labmates. USCIS adjudicators are trained to discount self-citations, and when a petition doesn't separate them, the adjudicator assumes the worst. A petitioner with 412 total citations and 280 independent citations will do better by reporting both numbers explicitly than by reporting only the 412.

Meet Dr. Amara: The C5 Case Study

Dr. Amara Okonkwo is a Nigerian-born nephrologist and clinical researcher at Massachusetts General Hospital. 41 years old. Medical degree from University of Ibadan, fellowship at Johns Hopkins, 9 years at Mass General. Publications: 34 peer-reviewed papers, 3 first-author in top nephrology journals, 620 citations, h-index 15. She developed a novel biomarker assay for early-stage kidney injury that has been adopted by three major academic medical centers in their clinical protocols.

Dr. Amara's first-draft C5 section looked like this (paraphrased):

"Dr. Okonkwo has made original contributions of major significance in the field of nephrology. Her development of the novel kidney injury biomarker assay is a significant innovation that has been recognized by experts in the field. As described by Dr. [Expert A], 'Dr. Okonkwo's biomarker work represents a major advance in early-stage detection.' As described by Dr. [Expert B], 'The assay Dr. Okonkwo developed has significantly changed how we approach early kidney injury screening.' The petitioner's contributions are therefore of major significance."

Read that paragraph with adjudicator eyes. What does it establish? Expert opinions that the work is significant. What does it not establish? Whether anyone outside the petitioner's immediate network has actually adopted the assay, whether the adoption has changed outcomes, whether the citation record demonstrates independent use, or whether the work is substantive enough to move the field.

The audit rewrite looked very different:

"Dr. Okonkwo's original contribution is the development of the [specific biomarker] assay for early-stage acute kidney injury detection, first described in [citation] (Nature Medicine, 2020). Since publication, the assay has been:

- Adopted by three independent academic medical centers (Johns Hopkins, UCSF, Northwestern) in their clinical screening protocols, as documented by Exhibits 12-14, which include implementation letters from the department chairs of Nephrology at each institution.

- Cited 187 times across 43 unaffiliated research groups in the United States, Europe, and Asia (Exhibit 15 includes a complete independent citation list with author affiliation verification).

- Incorporated into the National Kidney Foundation's 2023 Clinical Practice Guidelines for early kidney injury detection (Exhibit 16).

- Licensed to [diagnostic company] for development into a commercial point-of-care assay, with royalty statements (Exhibit 17) demonstrating approximately $180,000 in licensing revenue in 2023.

Of the 620 total citations to Dr. Okonkwo's published work, 531 (85.6%) are from research groups with no coauthorship relationship to her (verified through methodology described in Exhibit 18).

The 'major significance' standard as articulated in USCIS Policy Manual Vol. 6, Part F, Ch. 2 requires a demonstrated impact beyond the petitioner's own work. The evidence above establishes impact through: (1) adoption at three independent academic medical centers, (2) incorporation into national clinical guidelines, (3) commercial licensing and deployment, and (4) an 85.6% independent citation rate. The contribution meets the 'major significance' standard under any reasonable reading of the regulation."

Notice what changed. Every adjective has been replaced with a number. Every claim has an exhibit citation. Expert letters (which Dr. Amara did still include, in a separate section) are no longer doing the heavy lifting — the heavy lifting is being done by independent adoption evidence. Self-citations are separated from independent citations. The regulatory standard is quoted explicitly and the evidence is connected to it.

This is what a strong C5 section looks like. It is not more expert letters. It is a specific, documented account of what happened after the original work was published.

Curious how your own petition scores?

Lumova reads your petition the way a USCIS adjudicator reads it — Kazarian two-step, per-criterion RFE risk, field percentile, readiness score. Ten minutes. No attorney fees.

Run a free audit preview

The Evidence Hierarchy for C5

Not all C5 evidence is equally valued. The Lumova dataset shows a clear hierarchy, from strongest to weakest:

Tier 1 — Strongest:

  • Incorporation into clinical guidelines, standards, or regulations issued by authoritative bodies
  • Licensing or commercialization with revenue documentation
  • Adoption as a standard technique or methodology in the field
  • Independent replication or validation by unaffiliated research groups

Tier 2 — Strong:

  • Citations by researchers in other countries or institutions with no coauthorship overlap
  • Press coverage in major professional publications (not mass media)
  • Invited keynote or plenary talks at premier field conferences
  • Named methodologies, techniques, or algorithms ("the Okonkwo assay")

Tier 3 — Supporting but not sufficient alone:

  • Citations within the petitioner's own research network
  • Expert letters from outside close-colleague relationships
  • Patents and pending patent applications
  • Implementation at the petitioner's own organization

Tier 4 — Weakest (may hurt more than help):

  • Expert letters from current collaborators or former supervisors
  • Self-citations
  • Press releases or marketing materials from the petitioner's employer
  • Speaking invitations at conferences where the petitioner is a co-organizer

A strong C5 section should lean heavily on Tier 1 and Tier 2 evidence. Tier 3 evidence is acceptable as supplementary. Tier 4 evidence should generally be excluded — it dilutes the credibility of the stronger evidence.

The Quantification Principle

Every sentence in your C5 section should contain at least one specific number, source, or date. If a sentence is purely qualitative, rewrite it. Examples:

Weak (adjective): "The petitioner's research has been influential in the field of computational biology."

Strong (quantified): "The petitioner's 2022 paper in Nature Communications on transformer-based protein folding has been cited 412 times in 31 months, including 14 citations from the Nobel Prize-winning DeepMind team's subsequent AlphaFold publications."

Weak (adjective): "The petitioner's contributions have been adopted by major research institutions."

Strong (quantified): "The petitioner's chromatin accessibility pipeline has been adopted as a standard analysis protocol by the ENCODE Consortium (13 member institutions), the NIH Roadmap Epigenomics Project, and the Broad Institute's Functional Genomics program, as documented in Exhibits 12-14."

The quantification principle is not about stuffing numbers into sentences for the sake of it. It is about making every claim verifiable. The adjudicator should be able to read any sentence in your C5 section and know exactly where in the exhibits to look for proof.

How to Build C5 Evidence If You're Starting From Scratch

If you're preparing to file and realize your C5 evidence is weak, here are the evidence-building activities that produce results:

1. Run a full citation audit. Use Google Scholar, Web of Science, and Scopus to compile a complete list of citations to your published work. For each citation, verify whether the citing author has any coauthorship relationship to you. Compute your total citation count and your independent citation count. Report both.

2. Identify adoption examples. Think about your work and ask: "Who has actually used what I built, and where did they use it?" For a software engineer: companies or projects that deployed your code. For a researcher: labs or clinical programs that integrated your methodology. For an artist: exhibitions, performances, or productions that featured your technique. Document each adoption with specific evidence.

3. Find independent citing authors and ask them for letters. The single most effective C5 strategy is to obtain independent expert letters from researchers who have already cited your work in their own publications. Start with Google Scholar's "cited by" function on your most-cited paper. Email 10-15 citing authors with a short, respectful message explaining what you need. Expect 25-40% to say yes.

4. Document commercial impact if relevant. If your work has been licensed, deployed, or used commercially, gather revenue documentation, licensing agreements, user counts, and any other quantitative evidence of commercial adoption.

5. Document policy or guideline integration. If your work has been cited by government agencies, professional societies, or standards bodies, gather the specific documents and highlight the citation. This is Tier 1 evidence and is disproportionately persuasive.

FAQ

Q: Can I win C5 with strong publication and citation evidence alone, without other downstream impact?

A: Sometimes — but only if your citation record is exceptional for your field. For a computational biologist with 1,500+ independent citations in high-Impact Factor journals, citations alone can meet the standard. For most petitioners, the combination of citations plus other downstream impact evidence is necessary.

Q: What if my field doesn't have strong citation practices (e.g., industry tech work)?

A: Focus on alternative impact evidence: deployed systems used by millions of users, open-source adoption metrics (downloads, GitHub stars with context, forks, dependent repositories), commercial revenue attribution, or patent citation records. The hierarchy still applies; the specific evidence forms just shift from academic to industry.

Q: Can an expert letter address "major significance" in a way that counts as evidence?

A: Expert letters supplement evidence; they do not replace it. An expert letter that explicitly cites specific independent adoptions of your work, with dates and institution names, is more persuasive than an expert letter that generically opines on significance. Opinions are Tier 4; specific facts from an expert are Tier 2.

Q: Does Lumova's audit specifically check C5 structure?

A: Yes. The audit flags the exact patterns described in this article — adjective-to-number ratio, self-citation separation, Tier 1 evidence presence, expert-letter-heavy reliance — and generates specific rewriting recommendations. Run your audit →


Remember: Lumova is educational — not legal advice.

Try Lumova free →

The Lumova Audit

See your RFE risks before USCIS does.

Upload your petition. In under ten minutes, Lumova returns a Kazarian two-step verdict, per-criterion RFE risk scoring, and a field percentile comparing your profile against 10,000+ real AAO decisions — the same patterns USCIS adjudicators are trained on.

Kazarian Step 1 (per-criterion) + Step 2 (final merits totality)
Per-criterion RFE likelihood with specific reasons
Field percentile against 10,000+ AAO decisions
Readiness score 0–100 + prioritized action items
Overall RFE likelihood range (e.g. 35–55%)
Language quality scoring with text excerpts

Lumova is educational, not legal advice. I am not an immigration attorney and no attorney-client relationship is created by using this platform. For individual legal advice, consult a licensed immigration attorney.