Haemantheranos
Haemanthus, Theranos, and the Return of Diagnostic Optimism
There’s something deeply familiar about Haemanthus. A young company appears with a promise to revolutionize health diagnostics. It claims to use light and artificial intelligence to "read" biological fluids (urine, sweat, saliva, blood) and detect disease, all through a small device. It raises funding quickly, operates in stealth, and says that its technology is fundamentally different from anything that came before.
The catch is that we’ve heard this before. Not just in tone, but almost literally. The prototype looks like the Edison machine. The founder is Billy Evans, Elizabeth Holmes’s partner. And the claim—reading patterns from small bodily samples to uncover hidden illness—is a near-mirror of what Theranos pitched more than a decade ago.
That resemblance doesn’t automatically make Haemanthus fraudulent. But it should make us pause—especially because the burden of proof isn’t merely scientific. It’s historical.

What Exactly Is Being Promised?
Haemanthus claims to use Raman spectroscopy guided by artificial intelligence to detect patterns in fluids that would be invisible to current tests. The way it’s described in public statements is strikingly broad: thousands of biomarkers, immediate results, minimal invasiveness, all made possible through light-based sensing and AI-native devices.
On a surface level, Raman spectroscopy is a credible technique. It's used in chemistry and materials science to identify molecular structures by measuring how light scatters when it hits a sample. Each molecule scatters light in a slightly different way—this “fingerprint” can theoretically be used to identify what’s inside a biological fluid. That part is real.
But the translation from “in theory” to “in diagnostics” is where it gets complicated.
Raman signals are incredibly weak: on the order of one in a million incident photons undergoes Raman scattering, and often far fewer. To get anything usable, especially in complex fluids like blood or saliva, you need powerful lasers and extremely sensitive detectors. Biological samples also fluoresce under laser light, creating background noise that often drowns out the signal. Extracting meaningful diagnostic information from that noisy data is a huge challenge even in tightly controlled lab settings. Doing it in a small, consumer-grade device is an entirely different level of difficulty.
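To see why the fluorescence background matters, here is a toy simulation (illustrative numbers only, not real spectra or anything from Haemanthus): a narrow Raman peak sits on a broad fluorescence baseline more than ten times its height, so naive peak-picking finds the fluorescence instead. A standard first step, fitting and subtracting a low-order polynomial baseline, recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Raman-shift axis (cm^-1); all numbers here are made up for illustration
shift = np.linspace(400, 1800, 500)

# A weak, narrow Raman peak riding on a broad fluorescence background plus noise
raman_peak = 3.0 * np.exp(-((shift - 1000.0) / 10.0) ** 2)   # signal of interest
fluorescence = 50.0 * np.exp(-(shift - 400.0) / 2000.0)      # slowly varying baseline
spectrum = raman_peak + fluorescence + rng.normal(0.0, 0.3, shift.size)

# Raw peak-picking fails: the brightest point is the fluorescence edge,
# not the Raman peak
print(shift[np.argmax(spectrum)])

# Estimate the baseline with a low-order polynomial fit, subtract it,
# then look for peaks in the residual
baseline = np.polyval(np.polyfit(shift, spectrum, 3), shift)
corrected = spectrum - baseline
print(shift[np.argmax(corrected)])   # now lands near the true peak at ~1000 cm^-1
```

Real pipelines use more sophisticated baseline-correction methods, but the point stands: before any "AI" sees the data, heavy signal processing decides what survives.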
This is where Haemanthus leans on AI. The idea seems to be that machine learning models can take in the noisy spectral data and “learn” to distinguish subtle biomarkers—patterns too complex for human interpretation. That, too, is possible in principle. But there’s a huge difference between training a model to recognize cancer markers in a dataset of 100 blood samples, and deploying it reliably in real-world clinics across different demographics, diets, comorbidities, and conditions.
The real issue is generalization. AI models trained on small, clean datasets often fail when faced with messy, real-world data. Signal drift, batch variation, hydration levels, skin pigmentation, even what someone ate the night before—these all change the spectral signature. And unless those factors are carefully modeled and tested, the AI will fail silently, returning confident but incorrect predictions. That’s not just a technical problem. It’s an ethical one.
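To make that failure mode concrete, here is a minimal synthetic sketch (not Haemanthus’s actual pipeline, models, or data): a classifier trained on "clean" simulated spectra keeps making confident predictions after a simple baseline shift of the kind instrument drift produces, even as its accuracy collapses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Each "spectrum" is 50 intensity channels; class 1 gets a small bump at
# channels 20-24 standing in for a hypothetical disease biomarker.
def make_samples(n, baseline_offset=0.0):
    X = rng.normal(0.0, 1.0, (n, 50)) + baseline_offset
    y = rng.integers(0, 2, n)
    X[y == 1, 20:25] += 1.5
    return X, y

X_train, y_train = make_samples(300)                        # "clean lab" data
X_drift, y_drift = make_samples(300, baseline_offset=2.0)   # same task, drifted baseline

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
drift_acc = model.score(X_drift, y_drift)
confidence = model.predict_proba(X_drift).max(axis=1).mean()

print(f"lab accuracy:     {train_acc:.2f}")   # looks excellent in the lab
print(f"drifted accuracy: {drift_acc:.2f}")   # degrades badly under drift
print(f"mean confidence on drifted data: {confidence:.2f}")  # yet stays high
```

Nothing in the model’s output signals that anything has changed; the failure is silent, which is exactly why diagnostic models need validation across the conditions they will actually encounter.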
So far, Haemanthus has shared no peer-reviewed validation, no published architecture for its AI models, no details on training data, and no formal studies showing how its system performs outside the lab. Without that, the claim that its device can see “what current tests can’t” is just that: a claim.
The Language Problem
There’s a reason this sounds familiar. The rhetoric around Haemanthus is almost algorithmically similar to Theranos: “revolutionary,” “complete molecular story,” “first of its kind.” These are not scientific terms. They’re narrative devices—crafted to attract investment, simplify complexity, and create a sense of inevitability. Even the phrase “AI-native sensor” feels engineered more for its emotional resonance than for technical specificity.
What does “AI-native” mean, exactly? Is the sensor embedded with onboard machine learning capabilities? Is it just a pipeline that runs spectral data through an external model? Does the AI operate at the hardware level (e.g., noise correction on-chip) or only post-analysis? These distinctions matter, but they’re elided in favor of sleek branding.
This isn’t to accuse Haemanthus of deceit. The deeper concern is about epistemic responsibility. In health tech, especially diagnostics, language needs to be descriptive, not evocative. If we call something “the future of health,” we need to know what it replaces, and how. If it claims to do better than existing lab tests, we need the comparison: sensitivity, specificity, false positives, false negatives. Otherwise, we’re not communicating science. We’re performing belief.
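The comparison this paragraph asks for reduces to a handful of numbers. A sketch with entirely made-up counts (hypothetical, for illustration only) shows how they relate, and why even a respectable-looking specificity produces many false alarms when disease prevalence is low:

```python
# Hypothetical confusion-matrix counts comparing a new test against a
# confirmed diagnosis. These numbers are invented for illustration; a real
# comparison would come from a registered clinical study.
tp, fn = 90, 10    # diseased patients: correctly flagged vs. missed
tn, fp = 850, 50   # healthy patients: correctly cleared vs. falsely flagged

sensitivity = tp / (tp + fn)   # share of real cases the test catches
specificity = tn / (tn + fp)   # share of healthy people it correctly clears

# At low prevalence, false positives swamp true positives:
prevalence = (tp + fn) / (tp + fn + tn + fp)
ppv = tp / (tp + fp)           # chance a positive result is a true case

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"prevalence={prevalence:.2f} PPV={ppv:.2f}")
```

Here a test that catches 90% of cases and clears 94% of healthy people still yields a positive result that is wrong more than a third of the time. Those are the numbers a diagnostic claim has to put on the table.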
Why This Isn’t Just About One Company
The temptation, when facing companies like Haemanthus, is to either lionize or dismiss. But that dichotomy misses the broader issue: the venture ecosystem that continues to fund high-credence claims without high-resolution evidence.
It’s easy to forget that Theranos was not just a failure of one founder. It was a structural failure—a system in which media attention, investor excitement, and institutional trust were all activated long before there was anything real to test. Haemanthus, fairly or not, walks into a space still defined by that memory. And when it echoes the same secrecy, uses the same language, and asks for the same early belief, it invites the same critique.
Skepticism here is not cynicism. It’s a moral position. When the stakes involve medical interpretation—what diseases someone has, whether a symptom is benign or dangerous—precision is not optional. Hype is not harmless.
In the End
There may be real science behind Haemanthus. Maybe they are working through the Raman signal-to-noise problem in new ways. Maybe their AI models have been trained on massive datasets we haven’t seen. Maybe the resemblance to Theranos is mostly aesthetic. All of that is possible.
But the responsibility of proof lies with them. Not in one-off statements, but in verifiable studies, open methodology, and a willingness to be wrong in public. Until that happens, belief in the company requires exactly what science is designed to avoid: trust without evidence.
And if that’s what it takes to make progress in diagnostics today, then perhaps the real crisis isn’t technological. It’s philosophical.


