Key Strengths
- Clear, jargon-free vision statement (HQ1) — "What if every healthcare billing team..." framing is accessible and compelling
- Strong "Why Now" with 8 market timing factors including technology convergence, regulatory mosaic, and AI adoption paradox
- Explicit "Why ARPA-H" argument — commercial incentive misalignment is well-articulated
- Go/No-Go gate at Month 4 with specific criteria — shows program management discipline
- Three Technical Areas (TA1/TA2/TA3) with clear scope boundaries and milestone mapping
- 10 risk rows including transition, programmatic, reputational, and regulatory — comprehensive risk awareness
- Federal keyword screen is clean — "parity" replaced with "nondiscrimination" even in MHPAEA context
- IP Table, BOE, and Payment Milestone Schedule now included per UMich template
- Quantified 3-5x improvement metric and 40-point overturn variance, with methodology stated (n=47, propensity-matched, κ=0.84)
- Broad Access expanded with safety-net provider commitments (FQHCs, Critical Access, rural); HQ10 safeguards in place
- TA dependency flow documented: TA1→TA2→TA3 with closed-loop feedback
- Partner selection strategy documented with target ICP profile (50+ bed behavioral health provider, PHP/IOP/SUD programs, access to 835 remittance data)
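The κ=0.84 agreement statistic cited above is Cohen's kappa. As a minimal sketch of how such a figure is computed (the 2x2 agreement table below is hypothetical, not the proposal's data):

```python
# Illustrative sketch only — not the proposal's actual analysis.
# Cohen's kappa: inter-rater agreement corrected for chance agreement.
def cohens_kappa(matrix):
    """matrix[i][j] = count of items rater A labeled class i and rater B labeled class j."""
    total = sum(sum(row) for row in matrix)
    k = len(matrix)
    # Observed agreement: proportion of items on the diagonal
    observed = sum(matrix[i][i] for i in range(k)) / total
    # Expected chance agreement: product of each rater's marginal proportions per class
    expected = sum(
        (sum(matrix[i]) / total) * (sum(matrix[j][i] for j in range(k)) / total)
        for i in range(k)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical table: two coders independently labeling 47 claim outcomes
table = [[20, 2],
         [3, 22]]
print(round(cohens_kappa(table), 2))
```

With this made-up table, kappa lands near 0.79; a reported κ=0.84 indicates similarly strong agreement between independent coders.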
Key Weaknesses & Gaps
- Partner organization is still TBD — "[Partner Healthcare Organization - TBD during contracting]" weakens credibility
- All sub-awardee POCs are TBD — no named individuals beyond Stratum's team
- Statistical power discussion is deferred to Phase II — reviewers may question the Phase I sample size (60-80 outcomes)
- Phase I budget ($441K) is significantly below ARPA-H norms ($1M-$109M range, avg $26M) — needs increase
- No letters of support or intent referenced
- No explicit team qualifications section — relies on cover page POCs only
- Cost Sharing line shows "Partner In-Kind" but partner is TBD
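To make the sample-size concern above concrete, here is a rough normal-approximation sketch of the minimum detectable difference between two arms at 60-80 total outcomes (the baseline rate, alpha, and power values are my assumptions for illustration, not the proposal's):

```python
# Back-of-envelope sketch (assumptions mine, not the proposal's):
# minimum detectable difference in a two-proportion comparison at
# alpha=0.05 two-sided (z=1.96) and 80% power (z≈0.84), equal arms.
import math

def min_detectable_diff(p0, n_per_arm, z_alpha=1.96, z_beta=0.84):
    """Normal-approximation MDE, using the baseline rate p0 for both variance terms."""
    # delta ≈ (z_alpha + z_beta) * sqrt(2 * p0 * (1 - p0) / n_per_arm)
    return (z_alpha + z_beta) * math.sqrt(2 * p0 * (1 - p0) / n_per_arm)

# Assumed 30% baseline overturn rate; 60-80 total outcomes => 30-40 per arm
for n in (30, 40):
    print(n, round(min_detectable_diff(0.30, n), 2))
```

Under these assumptions the detectable difference is roughly 29-33 percentage points per arm: large enough to see an effect on the scale of the cited 40-point overturn variance, but not a modest one, which is exactly what a reviewer is likely to probe.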
▶ Heilmeier Questions (HQ1–HQ10)
88/100. Evaluated against ARPA-H's "Hidden Questions Behind the Heilmeier Questions" — 10 questions with sub-questions and considerations that program managers use to evaluate proposals.
HQ1: What are you trying to do?
HQ2: How is it done today, and what are the limits?
HQ3: Why now? Why ARPA-H?
HQ4: What's new in your approach?
HQ5: Who cares? What risks?
HQ6: Deliverables & Timeline
HQ7: Cost & Budget
HQ8: Midterm Check
HQ9: Broad Access & Affordability
HQ10: Misperception Safeguards
▶ Technical Innovation & Differentiation
90/100
Novelty of Core Concept
Technical Architecture Clarity
Quantitative Claims
Competitive Moat Argument
Engineering & Scientific Basis
▶ UMich Template & ARPA-H Format Compliance
82/100. Evaluated against the University of Michigan ARPA-H resource collection: Solution Summary Template, Budget Development Resources, Payment Milestone Schedule, IP Table Template, Task Description Document.
Sub-Awardee Completeness
Page Length Concern
BOE Format
▶ Federal Keyword Screening
95/100. Scanned against the NSF 68-word list, HHS/Head Start ~200-word list, and PEN America 350+ word tracking list. V10 replacements (barriers→challenges, systemic→structural, unbiased→independent, institutional memory→operational memory, tribal knowledge→undocumented knowledge) are confirmed in V11.
| Category | Status | Detail |
|---|---|---|
| DEI Terminology | CLEAR | No instances of: equity, diversity, inclusion, belonging, marginalized, underserved, vulnerable, minority |
| Gender/Race Terms | CLEAR | No instances of: gender, female, women, racial, racism. "Race" appears only as substring in "traceability" — false positive |
| Systemic/Structural | CLEAR | "Systemic" replaced with "structural" in V10. V11 uses "structural" consistently |
| Barriers | CLEAR | "Barriers" replaced with "challenges" in V10. V11 uses "challenges" consistently |
| Bias/Unbiased | CLEAR | "Unbiased" replaced with "independent" in V10. No instances remain |
| Tribal/Indigenous | CLEAR | "Tribal knowledge" replaced with "undocumented knowledge" in V10 |
| Trauma | CLEAR | No instances of: trauma, trauma-informed, ACE, adverse childhood |
| Disability | CLEAR | No instances |
| Disparity/Inequity | CLEAR | No instances |
| Behavioral Health Context | CLEAR | "Behavioral health" used throughout — essential domain term. "Parity" replaced with "nondiscrimination" even in MHPAEA regulatory context. Clean. |
Overall Keyword Risk
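A screen like the one summarized above reduces to a whole-word regex scan. This sketch (the term list is a tiny illustrative subset, not any agency's actual list) shows why "race" inside "traceability" only trips naive substring matching, not a word-boundary match:

```python
# Minimal whole-word keyword screen (illustrative subset of flagged terms).
import re

FLAGGED = ["equity", "barriers", "race", "systemic", "unbiased"]

def screen(text, terms=FLAGGED):
    """Return {term: count} for whole-word, case-insensitive matches only."""
    hits = {}
    for term in terms:
        # \b anchors restrict matches to whole words, so "race" does not
        # match inside "traceability" the way a substring search would
        found = re.findall(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE)
        if found:
            hits[term] = len(found)
    return hits

sample = "Traceability of claims reduces challenges; structural review is independent."
print(screen(sample))  # no hits: "race" appears only as a substring of "traceability"
```

A substring search (`"race" in text.lower()`) would flag the same sentence, which is the false positive the table above notes.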
▶ Proxy Awardee Pattern Alignment
78/100. Evaluated against patterns extracted from 6 ARPA-H proxy awardees: PRECISE-AI, DIGIHEALS, ADVOCATE, MARCUS, Every Cure, and Duality Technologies.
Patterns V11 Adopts Well
Phased technical approach — All successful awardees use progressive complexity scaling. V11's Phase I→II→III progression mirrors this.
Public-interest framing — Exportable schemas, open standards, broad access. Matches ARPA-H's emphasis on health impact beyond the performer.
Patterns V11 Could Strengthen
Team credentials section — Proxy awardees highlight PI/team qualifications prominently. V11 has no dedicated team qualifications section.
Preliminary data specificity — Awardees with "preliminary data" claims typically cite specific studies or pilot results with methodology. V11's "preliminary operational data" is vague.
▶ Risk Coverage & Resilience Design
86/100
Go/No-Go Gate Design
Resilience Framing
Regulatory/Political Risk
▶ Priority Actions for V12
Action Items

| # | Action | Impact | Status |
|---|---|---|---|
| 2 | Name at least one clinical partner — even a letter of intent removes the biggest remaining credibility gap. Target ICP now documented. | Critical | External |
| 3 | Add a team qualifications paragraph — PI experience, domain expertise, technical capabilities. 3-4 sentences. | High | Pending |
| 4 | Regenerate DOCX and verify page count — new content added; confirm 6-page compliance. | Medium | Pending |
| ✓ | Eight additional action items | | Done |