Australia's Beauty Authority · April 2026
Vol. 01 · Issue 04 Glow. Australia · Est. 2014
We earn commission on some products we recommend. Commission never influences rankings. Read our standards.
Editorial standards

How we review.

The methodology behind every Glow ranking. Documented testing protocols, a published scoring rubric, declared conflicts of interest, and the editorial wall that sits between our commerce and our content. This page is the audit trail. Read it carefully if you intend to rely on what we publish.

01 · Product sourcing.

Every product reviewed by Glow is acquired in one of three ways. We disclose which one for every individual review.

Editor-purchased

The editor walks into Adore Beauty, Mecca, Sephora AU, Priceline, or the brand's own retail location and pays for the product at full retail. This is our default and accounts for 71% of products tested in 2025.

Brand-supplied for review

The brand sends the product directly to our editorial office on the understanding that it will be reviewed objectively and may receive a low score. Accepting product does not obligate publication; if the product fails our scoring threshold, the review either does not run or runs as a public negative review with the supplier credit declared. 24% of products tested in 2025 were supplied this way.

Retailer loan

Adore Beauty or Mecca occasionally loans premium devices ($400+) for editorial testing, returning them after the test period. We declare the loan in the review. Loans cover 5% of products tested in 2025 and are exclusively for hardware categories where the unit cost makes editor-purchase impractical at our review volume.

We do not accept paid product placement in rankings. We do not review products we have not held in our hands. We do not publish "round-ups" assembled from PR reach-outs without testing.

02 · Testing protocol.

Each category has a documented testing protocol that the assigned editor follows. The protocol is shared with the editor before testing begins and is referenced in the published review. Below is the abbreviated protocol for two categories. Full protocols are available on request.

Self-tan testing protocol (abbreviated)

  1. Test on three skin tones across three editors. Record skin tone on the Fitzpatrick scale before application.
  2. Apply at consistent humidity (45–55% RH) and ambient temperature (20–22°C). We record both for every test.
  3. Photograph the application at 30 minutes, 2 hours, 4 hours, 8 hours, 24 hours under standardised lighting.
  4. Independently rate streak visibility, palm/wrist transfer, fade pattern, and odour residue at 24 and 72 hours.
  5. Reapply once at the manufacturer's recommended interval. Test fade evenness across two complete development cycles.
  6. Aggregate scores from the three testers, weighted by category importance (see scoring rubric).

Skincare active testing protocol (abbreviated)

  1. Eight-week minimum test period for any retinol, vitamin C, AHA/BHA, or actives-led serum. Six weeks for moisturisers, cleansers, hydrators.
  2. Patch-test on the inner forearm for 48 hours before facial application. Document any reaction.
  3. Editor maintains a daily log for the test period: usage frequency, layering protocol, observed effects, irritation events.
  4. Photographs taken at week 0, week 4, week 8 under controlled lighting (5500K daylight) and identical pose.
  5. Cross-test with at least one alternative formulation in the same active class to anchor the comparison.
  6. Disclose any concurrent skincare changes that could confound the result.

If a product is reviewed without following the relevant protocol, the review is marked "first-impression" and excluded from rankings.

03 · Scoring rubric.

Each product is scored against a five-axis rubric with category-weighted multipliers. The composite score, expressed out of 10, is what appears in our rankings. The five axes are consistent across categories. The weighting differs.

| Axis | Weight (skincare) | What it measures |
| --- | --- | --- |
| Performance | 35% | Did the product produce the result it was sold to produce, in the timeframe claimed, on at least two of three test subjects? |
| Tolerability | 25% | Side effects, irritation, sensitivity, fragrance load, ingredient transparency, percentage of test subjects who experienced an adverse reaction. |
| Texture & application | 15% | Spreadability, layerability with other products, sensorial experience, packaging hygiene, dosing precision. |
| Value | 15% | Price per outcome, price per ml/g of active ingredient, refill availability, durability of result. |
| Repurchase intent | 10% | Of the editors who tested it, how many would buy it again with their own money? This is the closest proxy to honest endorsement. |

Weights vary by category. For devices, "performance" rises to 45% and "value" to 25%. For wellness supplements, an additional sixth axis — clinical evidence quality — is included at 20%. The full per-category weighting matrix is published as an appendix to each ranking page.

Scores below 6.5 are not published as rankings. They appear instead in our brand-review pages and in the Confessions column.
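As a worked illustration of the rubric above, the composite is a weighted average of the five axis scores, each out of 10. The weights below are the published skincare weights; the example axis scores are hypothetical, chosen only to show the arithmetic and the 6.5 publication threshold.

```python
# Worked example of the five-axis composite score.
# Weights are the published skincare weights; the axis scores are
# hypothetical, used only to illustrate the arithmetic.

SKINCARE_WEIGHTS = {
    "performance": 0.35,
    "tolerability": 0.25,
    "texture_application": 0.15,
    "value": 0.15,
    "repurchase_intent": 0.10,
}

RANKING_THRESHOLD = 6.5  # scores below this are not published as rankings


def composite_score(axis_scores: dict, weights: dict) -> float:
    """Weighted average of per-axis scores (each out of 10)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(axis_scores[axis] * w for axis, w in weights.items())


example = {
    "performance": 8.0,
    "tolerability": 7.0,
    "texture_application": 6.0,
    "value": 7.0,
    "repurchase_intent": 9.0,
}

score = composite_score(example, SKINCARE_WEIGHTS)
print(f"composite: {score:.1f}/10")                # composite: 7.4/10
print("publishable:", score >= RANKING_THRESHOLD)  # publishable: True
```

Swapping in the device weights (performance 45%, value 25%) changes only the `weights` dict; the aggregation is identical across categories.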

04 · Who tests.

Every product is tested by a named, credentialled editor whose category authority is published on our editors page. We do not run anonymous reviews. We do not run reviews by freelancers without editorial credit. We do not buy reviews from external testing services.

Our category leads as of April 2026 are listed on the editors page.

Each editor's by-line appears on every product they have tested. Each editor is contactable by email through the masthead. Each editor's professional background and any disclosable industry ties are published on the editors page.

05 · Editorial wall.

Glow runs an internal separation between our commerce team (the people who manage affiliate relationships, sponsorship, and brand-direct revenue) and our editorial team (the people who score and write reviews). The wall is operationally enforced.

The commerce team does not see review scores before publication. The commerce team does not negotiate retroactive commission rates based on coverage. Commerce-team requests for editorial coverage of specific products are categorically refused.

Editors are not informed of which products carry an affiliate commission and which do not. We use a unified outbound link router (/out/?r=&p=) that resolves to the relevant retailer at click-time, so the editor writing the ranking does not see the underlying commercial relationship.
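A minimal sketch of how such a click-time router could work. The page states only that a unified /out/ route resolves to the relevant retailer at click time; every identifier, URL, and data structure below is hypothetical, included to show why the editor-facing link carries no commercial information.

```python
# Hypothetical sketch of a click-time outbound link router.
# Editors write only neutral /out/ links; the product-to-retailer
# mapping (and any commission flag) lives server-side, so the writer
# never sees the commercial relationship. All names are illustrative.

COMMERCE_TABLE = {
    # product_id -> (retailer_url, has_affiliate_commission)
    "serum-123": ("https://example-retailer.test/serum-123", True),
    "device-456": ("https://example-retailer.test/device-456", False),
}


def resolve_outbound(product_id: str) -> str:
    """Resolve a neutral /out/ link to its retailer URL at click time.

    The commission flag is read only by commerce-side accounting and is
    never returned to, or rendered for, the editorial side.
    """
    url, _has_commission = COMMERCE_TABLE[product_id]
    return url


print(resolve_outbound("serum-123"))
```

The design point is the indirection itself: because the editorial template only ever emits the neutral route, adding or removing an affiliate relationship is a server-side table change that leaves published reviews untouched.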

Brand-supplied product is logged in the same database as editor-purchased product. Reviews are written from the same template. The supplier source appears in a small disclosure block at the foot of each review and is not visually emphasised.

06 · Commerce policy.

Glow earns revenue from three sources, in this order: affiliate commission, sponsorship, and brand-direct revenue.

We do not accept paid placement in rankings. We do not accept "review priority" payments. We do not accept payment to remove negative reviews. We do not accept commission rate increases tied to coverage volume.

07 · Corrections and updates.

We publish corrections in line. If a factual claim in a review is wrong, we edit the review and add a dated correction note at the foot. We do not silently rewrite history.

Reviews are formally re-tested every 18 months at minimum. Reformulations, new variants, or significant retail price changes trigger an interim re-test. The "Last audited" date appears on every ranking page.

Brands may request a re-test if they believe a published review is materially inaccurate. We honour reasonable requests. We do not honour requests to soften critical language unless a factual error is identified.

08 · Annual audit.

Once per calendar year, an independent third party reviews 10% of our published rankings against our documented methodology. The audit confirms that the testing protocol was followed, that the scoring rubric was applied consistently, and that any commerce relationships were disclosed appropriately.

The 2025 audit was conducted by an independent media accountability auditor based in Sydney. The summary report is available to media partners and prospective acquirers on request.


"Most beauty publications publish their methodology because regulators ask them to. We publish ours because we want our readers to know exactly what to trust about a Glow ranking — and exactly where the human judgement begins." — Jackson Morice, Founder & Editor-in-Chief

Last reviewed by the editorial board: April 2026. Next scheduled review: October 2026.

Questions about a specific review or a methodology concern can be sent to [email protected]. We respond to every editorial integrity inquiry within five working days.