Every piece is sourced, cited, and fact-checked against the primary literature before it gets a slug. We publish what the studies actually say, including the uncomfortable parts.
Filter to Evidence Reviews for peer-reviewed breakdowns. Every piece shows the study tier, sample size, and whether the finding was replicated.
Click the Sleep topic tag. We cover delta entrainment, glymphatic clearance, and why most sleep apps measure the wrong variable.
Filter to HRV or Performance topics. These pieces explain what RMSSD and SDNN actually measure and how to use them as decision inputs before training.
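Both metrics are defined directly on the series of RR intervals from a heart-rate recording. As a minimal sketch (the interval values below are made up for illustration):

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences between RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sdnn(rr_ms):
    """Standard deviation of all RR intervals in the recording (ms)."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / len(rr_ms))

# Example: a short, invented RR-interval series in milliseconds
rr = [812, 798, 825, 840, 805, 818]
print(round(rmssd(rr), 1), round(sdnn(rr), 1))  # → 22.6 13.7
```

RMSSD tracks beat-to-beat (parasympathetic) variability, which is why it is the usual morning-readiness input; SDNN reflects overall variability across the whole recording window.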
Filter to Evidence Reviews and sort by evidence tier. We do not publish studies we cannot access in full. Preclinical findings are labeled. Wellness myths are named.
Subscribe above. One peer-reviewed breakdown per week. No promotional content. The journal is our open research record, not a content funnel.
No promotional junk. One research explainer per week, and nothing else.
Not because we cherry-pick from the same sources. Because these are the studies with the largest effect sizes, largest sample sizes, and highest replication rates in the frequency literature. Any serious review eventually cites them.
All citations above link to PubMed abstracts or open-access PDFs inside each article. We do not cite studies we cannot access in full text.
Wellness journalism is written to be convincing. A Rock Bird review is written to help you evaluate the evidence yourself. The reading posture for each is different. Here is the specific difference.
Wellness pieces are written to confirm a conclusion. The body supports the headline. The lede is the takeaway. You are being led to a position.
Most wellness articles cite studies. Most do not tell you the sample size, the control condition, or whether the finding has been replicated. The citation exists to add credibility, not to transmit information.
If limitations appear at all, they appear in the final paragraph, framed as caveats that do not undermine the headline conclusion. Usually they do undermine it.
Every Rock Bird review opens with the evidence tier. Read this before reading anything else. Tier 1 and Tier 2 findings deserve your full attention. Tier 3 and Tier 4 deserve your calibrated skepticism.
The limitations section in a Rock Bird review is written to be read, not scanned. It tells you the specific scope of the claim. The conclusion only makes sense if you have read the limitations first.
We report n, control condition, and whether the outcome was blinded for every primary study cited. These three numbers tell you most of what you need to know about how much confidence the finding deserves.
A wellness article wants you to feel informed. A Rock Bird review wants you to be able to act correctly, even if that means concluding the evidence is not yet sufficient to act on at all. If you finish a Rock Bird review and the honest answer is "the data is weak, wait for replication," that is a successful review. That outcome almost never appears in wellness journalism.
Most health and wellness publications cover studies. Rock Bird covers what the studies actually say, including what they do not say, which is almost always the more important information.
When a wellness headline says "scientists found that..." it virtually never tells you whether the study involved 12 participants or 12,000. This number determines whether the finding is preliminary signal or established science. Rock Bird reports the exact n for every primary study cited in a review. This one number changes how you should weigh the evidence.
A single positive RCT is a hypothesis, not a conclusion. Every review in the Rock Bird archive includes a "replication status" note: how many independent labs have produced consistent results, and in which direction they diverged. Findings that have not been replicated are marked explicitly as such. Unreplicated results are common in the binaural beat literature and we say so.
Industry-funded studies are more likely to report positive findings than independently funded studies. This is documented across medical and nutritional research. When a study on binaural beats was funded by a binaural beat company, or a TM study was funded by the Maharishi Foundation, Rock Bird notes this in the citation block. The finding is still included; the conflict of interest is disclosed.
Most binaural beat research was done in clinical populations (anxiety disorder patients, insomnia patients) or laboratory settings with long continuous exposure (60-90 minutes). Mortis users are healthy adults using 15-25 minute sessions in their daily life. Whether a clinical finding generalizes to this population is not obvious. Rock Bird explicitly evaluates this and flags when a study population differs meaningfully from the Mortis user base.
Rock Bird is the research branch of Mortis. Every piece cites the primary source. If we cannot access the full study, we do not summarize it. If a claim has no replicated human RCT, it is labeled accordingly.
Every evidence review starts with the paper, not the press release. Sample size, methodology, effect size, limitations. We do not cherry-pick findings. If the study shows a modest effect, we report it as modest.
Why 40Hz activates microglia. Why coherent breathing at 5.5 breaths per minute maximizes RMSSD. Why 2Hz delta correlates with glymphatic clearance. Mechanism-first, no abstraction left unexplained.
We audit claims in the wellness industry against the primary literature. Crystal frequency, alkaline water, detox protocols. Some are nonsense. A few have partial support. We label each clearly and cite the verdict.
Citations per Hz
Aggregate HRV deltas from users running the same protocols. Which frequencies moved the most people. Which breathwork patterns are most consistent. Field-observed n grows faster than any single study.
Your matrix
Anecdote presented as evidence. Preclinical findings extrapolated to human outcomes without caveat. Claims from supplement companies. Sponsored research with undisclosed conflicts. If it cannot be cited, it does not go in the journal.
Academic journals and Rock Bird both cover research. The similarities end there. Here is what is different, why we chose to be different, and what it costs us.
What Rock Bird costs: our review is not peer review. We are not methodologists. We can misread a statistical technique or miss a key limitation. We correct these when found. Every article has a correction policy and a version history. We are more accessible than a journal. We are also more fallible. Know the difference.
Every review follows the same format. Here is what each section tells you and how to read it efficiently if you want the verdict fast.
Every review opens with a one-sentence verdict: what the evidence says, at what confidence level. "Tier 2 evidence supports: 40Hz gamma entrainment increases EEG power in the gamma band. Human cognitive translation remains under investigation." Read this first. The rest is the evidence for it.
Each cited study is broken down into four fields: sample size, study design, what was measured, and the effect reported. We include the journal and year so you can find the primary source. Every DOI or PubMed link is clickable. If the study is behind a paywall, we say so and link to the abstract.
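The four-field citation block described above maps naturally onto a small record type. A sketch (the field names and the example study are hypothetical, not Rock Bird's actual schema):

```python
from dataclasses import dataclass

@dataclass
class CitedStudy:
    # Hypothetical structure mirroring the citation block fields described above
    sample_size: int
    design: str            # e.g. "RCT", "controlled", "observational"
    outcome_measured: str
    effect_reported: str
    journal: str
    year: int
    link: str              # DOI or PubMed URL, always clickable
    paywalled: bool        # if True, the review links to the abstract instead

# Invented example entry, for illustration only
study = CitedStudy(
    sample_size=36,
    design="RCT",
    outcome_measured="EEG gamma-band power",
    effect_reported="increase vs. sham, modest effect size",
    journal="(example journal)",
    year=2020,
    link="https://pubmed.ncbi.nlm.nih.gov/",
    paywalled=True,
)
print(study.design, study.sample_size)
```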
This is the part most wellness publications skip. Every review has a limitations section that names the specific weaknesses of the evidence: small sample, no control group, industry-funded, animal model only, single lab with no replication. If the limitations are severe enough to undermine the verdict, we say so explicitly and lower the tier label.
A review with a Tier 1 verdict and strong limitations is more honest than a review with a Tier 1 verdict and no limitations section at all. We believe the limitations section is the most important part. Read it before you share.
A research publication is defined as much by what it refuses to run as by what it prints. The wellness content category has a specific set of articles that drive traffic and compromise credibility. Rock Bird will not publish any of the six categories below. The editorial restraint is the value.
Rock Bird will not turn one newly published paper into a "new study shows" headline. A single result, especially from a small lab with no replication, is a hypothesis. We cover single-study findings only in the context of a broader evidence review, and only with the replication status labeled explicitly. The alternative is content that ages badly within six months.
"The 7 supplements a longevity expert takes every morning" is a reliable traffic driver and a category Rock Bird will not touch. The format inflates signal into prescription and stacks claims that do not interact in the body the way the listicle implies. If a specific intervention has a controlled study, we cover the study; we do not generate ordered lists.
Rock Bird will not profile a wellness personality's morning routine, sleep stack, or meditation practice as if it were evidence. Individual routines are anecdote. We cover the mechanism, not the performer. The exception is original data from a named researcher whose published work is being reviewed, clearly labeled as the researcher's position rather than consensus.
A bioRxiv or medRxiv pre-print has not been peer-reviewed. Rock Bird will cover pre-prints when the underlying work is relevant, but only with the pre-print status displayed in the verdict line and the methods scored as if the paper had entered formal review. The rest of the wellness press cites pre-prints as if they were journal articles. We do not.
Rock Bird will not publish head-to-head reviews of other meditation or HRV apps. The conflict of interest is obvious and the piece cannot be honest when Mortis is the publisher. Where a comparison is unavoidable (pricing, feature sets), we describe the structural differences factually and link to the other product's own website for their claims.
If Rock Bird would not let another company make a claim based on the data in front of us, we will not make it about Mortis either. This is the hardest rule to keep because a publication owned by a product company has every incentive to let internal claims slide. The editorial integrity of Rock Bird depends on applying the same tier system to the parent company. When we cannot, we flag the piece as marketing and move it out of Rock Bird entirely.
Restraint is not a marketing pose. It is the condition that makes Rock Bird useful. If you are reading a publication that runs every category above, you are reading a content farm with a research aesthetic. The six red lines are the minimum threshold for a research publication worth returning to, and they are rarer than the category suggests.
One review per week. Every review goes through the same four-stage process before it publishes. Here is what that looks like and why each stage exists.
PubMed, Google Scholar, and Cochrane are the starting points. We pull every human study, then every animal study, then conference abstracts. Each is graded using the Mortis tier system (RCT = Tier 1, controlled = Tier 2, observational = Tier 3, preliminary = Tier 4). Studies are not excluded for producing negative results. A well-powered null finding is Tier 1 evidence that something does not work.
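The tier mapping above is simple enough to state as a lookup. A minimal sketch of the grading step (the function name is ours, not Mortis's):

```python
def evidence_tier(design: str) -> int:
    """Map a study design to the Mortis tier system described above."""
    tiers = {
        "rct": 1,            # randomized controlled trial
        "controlled": 2,     # controlled but not randomized
        "observational": 3,
        "preliminary": 4,    # preclinical, case reports, conference abstracts
    }
    return tiers[design.lower()]

# A well-powered null RCT still grades Tier 1: the tier reflects study design,
# not the direction of the result.
print(evidence_tier("RCT"))  # → 1
```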
Rock Bird drafts the verdict based on the highest-tier evidence available, then runs an adversarial review: what would a skeptical methodologist say about this conclusion? Sample sizes, funding sources, replication status, effect sizes, and generalizability to our user population (adults, waking hours, short sessions) are all checked. The limitations section is written last, and it is required to name at least one specific weakness of the strongest evidence.
If the evidence supports a claim, the review links to the specific frequency, breathwork technique, or practice format in the app that operationalizes it. If the evidence contradicts something we publish, we flag it in the review and update the relevant matrix entry. The journal is not a standalone publication. It is the live research layer behind the product.
Reviews publish Monday. If a new study changes the evidence picture after publication, the review is updated with a dated note at the top explaining what changed and why the tier or verdict was revised. No review is silently edited. The publication date and last-revised date are both displayed. This is what a science-first publication looks like: corrections are visible, not buried.
Every correction is published as a dated note at the top of the affected review. We do not delete claims silently. If the evidence changed our view, we say what changed and when.
Readers can submit topics via the app. The editorial calendar is public. Studies sent in by users are read. If they change a verdict, the user who sent them is credited in the revision note.
Rock Bird updates evidence tiers when the literature justifies it. When a tier changes, here is how that information flows into your personal protocol.
A Rock Bird review is not an editorial opinion. It is a structured evaluation that can change the evidence tier of a frequency in the library. Here is what happens between the day a new paper comes out and the day your recommendation engine starts weighting that paper in your protocol.
Weekly automated queries against PubMed, bioRxiv, and SSRN for the specific keyword set of the six bands, HRV, and related autonomic measures. Roughly 40 papers per week surface in this first cut.
Most are excluded at this stage for wrong species, wrong methodology, or tangential relevance. Typical pass-through rate: 8 to 12 papers per week.
Rock Bird reads the methods section and scores the paper on five axes: study design, N, control condition, pre-registration status, and measurement validity. Papers scoring below a threshold are filed under "exists but not citable" and noted in the monthly digest without further review.
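The five-axis methods screen can be sketched as a scoring function. The point values and the threshold below are illustrative assumptions, not Rock Bird's published rubric:

```python
def methods_score(paper: dict) -> int:
    """Toy score over the five axes named above, 0-2 points each (max 10).
    Weights and cutoffs are invented for illustration."""
    design_pts = {"rct": 2, "controlled": 1}.get(paper["design"], 0)
    n_pts = 2 if paper["n"] >= 100 else (1 if paper["n"] >= 30 else 0)
    control_pts = 2 if paper["control"] in ("placebo", "sham") else (1 if paper["control"] else 0)
    prereg_pts = 2 if paper["preregistered"] else 0
    validity_pts = 2 if paper["validated_measure"] else 0
    return design_pts + n_pts + control_pts + prereg_pts + validity_pts

CITABLE_THRESHOLD = 6  # below this: "exists but not citable"

# Invented example paper
paper = {"design": "rct", "n": 48, "control": "sham",
         "preregistered": False, "validated_measure": True}
score = methods_score(paper)
print(score, score >= CITABLE_THRESHOLD)  # → 7 True
```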
Scoring is visible on the methods score card attached to every published review. We do not hide our rubric.
Papers that pass the methods screen get a full-text read. The reviewer extracts the primary effect size, the confidence interval, and the limitations the authors themselves flag. The review draft includes a one-paragraph plain-language summary that has to survive the "would this survive a clinician's scrutiny" test.
Turnaround from stage 2 to stage 3: typically 5 to 9 days.
If the paper claims an effect that we can cross-check against the Mortis community HRV corpus, we run the query. Does the effect the paper describes match what our users actually show? This is a step academic journals cannot do. Our data corpus produces a sanity check before the paper changes anything in the product.
Four of the last twelve reviews reached stage 4. Two of the four showed population-community agreement; two did not and received a downgraded weight in the product.
If the review and the community cross-check both support a tier change, the frequency entry in the matrix is updated. The next time you open the app, your recommendation engine uses the new tier in its weighting. A change log note appears on the frequency page. The Rock Bird review is published with the tier change explicitly noted.
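The update step described above amounts to rewriting one matrix entry and logging the change. A sketch under stated assumptions (the tier-to-weight mapping and field names are hypothetical; the real recommendation engine is not public):

```python
# Hypothetical weights: how much a finding at each tier counts in recommendations
TIER_WEIGHTS = {1: 1.0, 2: 0.6, 3: 0.3, 4: 0.1}

def reweight(frequency_entry: dict, new_tier: int, note: str) -> dict:
    """Apply a tier change and append a change-log note to a matrix entry."""
    entry = dict(frequency_entry)  # copy: the old entry stays intact for history
    entry["tier"] = new_tier
    entry["weight"] = TIER_WEIGHTS[new_tier]
    entry["changelog"] = entry.get("changelog", []) + [note]
    return entry

entry = {"band": "40Hz gamma", "tier": 3, "weight": 0.3, "changelog": []}
updated = reweight(entry, 2, "Upgraded after replicated controlled trial; see review.")
print(updated["weight"])  # → 0.6
```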
Average end-to-end time from new paper to product update: 3 to 6 weeks. Tier changes have been made to 4 frequencies in the matrix since the beta began.
The value of this pipeline, and the reason it justifies reading Rock Bird rather than a wellness blog, is that every review has a decision attached. The review is not "here is what I thought about the paper." It is "here is how this paper changed, or did not change, the product." That constraint is what separates an editorial opinion from an evidence-based product update.