AAB-MLE · Healthcare
A regulator-grade proficiency testing platform for medical labs
Custom Software · Compliance Tooling · Web Platform
Case Study · 2003 – Present

The platform medical labs use to prove their results are right
The American Association of Bioanalysts Medical Laboratory Evaluations (AAB-MLE) runs proficiency testing for clinical laboratories — the recurring blind-sample evaluations that every CLIA-certified lab is required to pass to keep operating. We've been the engineering team behind that platform for more than two decades, building the software that grades thousands of lab results per cycle, holds up under regulator audit, and gets refined every few months as compliance rules and member needs change.
Most case studies talk about a website launch. This one is about a custom application carrying the operating weight of a healthcare-compliance program.
At a glance
- Client since: 2003
- What they do: Proficiency testing (PT) for clinical and bioanalytical labs
- Annual cycles: M1, M2, M3, International, and Embryology/Andrology events
- Stack: Custom proficiency testing application + public site (aab-mle.org)
- Compliance surface: CLIA, COLA, NYS State ID, CAP, plus per-state auditor access
- Services delivered: Public site, custom platform engineering, grading system, PDF report uploader, e-signature, regulatory ID handling, demographic data ingest, audit trails, ongoing development
The challenge: testing labs at scale, with regulators watching
A proficiency testing program isn't a CMS workflow. Each event sends physical samples to participating labs across the country, collects their analytical results back, grades them against reference values and peer groups, and issues a regulator-grade report — with the right disclosures, the right state IDs, and the right e-signatures attached. The penalty for getting it wrong isn't a missed deadline. It's a CLIA finding.
The platform we maintain has to do all of that for five recurring event cycles a year, plus international participants on a separate import path, plus an off-cycle program for labs that miss a window. Every cycle adds new method codes, new instrument codes, new analyte rules, new participant counts, and new grading edge cases — none of which can break the ones already working.
A short list of the kind of work the platform actually has to support:
- Per-sample peer-group grading with manual override flags
- Bulk PDF reports with previewable upload, version history, and per-event clearing
- E-signature workflows with the ability to undo a signed analyte the lab no longer performs
- Demographic ingest from State Bar–style data files without creating duplicate accounts
- Auditor logins for state regulators (one identity, multi-state access)
- Flag reports collapsible by clinical number with CSV export for trend analysis
The solution: a platform that's been hardened by 20 years of real cycles
A grading system that holds up under audit
The grading engine handles the analyte logic that proficiency testing actually runs on — peer groups, reference ranges, manual fail flags, post-closure result-code edits for second organisms, micro-score calculations, statistical participant counts per sample. Reports show the right pass/fail flags in the right format, with descriptive method names instead of internal numeric codes.
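Peer-group grading of the kind described above can be sketched as follows. This is a minimal illustration, not the platform's actual engine: the function name, the standard-deviation-index rule, and the 2.0 threshold are all assumptions for the example.

```python
from statistics import mean, stdev

def grade_result(value, peer_values, sdi_limit=2.0, manual_fail=False):
    """Grade one analyte result against its peer group (illustrative).

    A result passes when its standard deviation index (SDI) -- its
    distance from the peer mean, measured in peer standard deviations --
    falls within the acceptable limit, unless a reviewer has set a
    manual fail flag.
    """
    if manual_fail:
        return "FAIL"
    peer_mean = mean(peer_values)
    peer_sd = stdev(peer_values)
    if peer_sd == 0:
        # Degenerate peer group: every peer reported the same value.
        return "PASS" if value == peer_mean else "FAIL"
    sdi = (value - peer_mean) / peer_sd
    return "PASS" if abs(sdi) <= sdi_limit else "FAIL"
```

A real engine layers many more rules on top (reference ranges, micro-score math, post-closure edits), but the core pattern is the same: compute the peer statistic, compare, and let a manual flag override the computation.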
Regulatory ID surfacing
CLIA, COLA, NYS State ID, CAP, and LAP numbers are first-class fields in demographic data and on every report — the identifiers regulators look for when they audit. Login and filtering for regulatory agencies (NYS, CAP, COLA) is gated by those identifiers, so an auditor sees what they're entitled to see and nothing more.
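Identifier-gated auditor access of this kind reduces to a simple filter: an agency login only ever sees labs that carry that agency's identifier in the auditor's jurisdiction. The field names and mapping below are hypothetical, used only to illustrate the pattern.

```python
# Hypothetical mapping of regulatory agency logins to the identifier
# field that scopes their visibility; names are illustrative.
AGENCY_ID_FIELD = {
    "NYS": "nys_state_id",
    "CAP": "cap_number",
    "COLA": "cola_number",
}

def visible_labs(agency, state, labs):
    """Return only the labs an auditor is entitled to see: those
    holding the agency's identifier, within the auditor's state."""
    field = AGENCY_ID_FIELD.get(agency)
    if field is None:
        return []
    return [
        lab for lab in labs
        if lab.get(field) and lab.get("state") == state
    ]
```

The design choice worth noting is that entitlement flows from data the labs already maintain for compliance, so there is no separate permissions table to drift out of sync with the regulatory IDs.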
Bulk operations and version history
Labs can upload PDFs in bulk with preview, version history, and the ability to repost an off-cycle report against an order without losing the prior version. CSV export of flag reports preserves table structure for downstream analysis, including the year tag that lets a lab compare cycles across calendar years.
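A structure-preserving CSV export with a year tag might look like the sketch below. The column names are invented for the example; the point is that each row carries the event year so downstream tools can line up cycles across calendar years.

```python
import csv
import io

def export_flag_report(rows, year):
    """Write flag-report rows to CSV, tagging every row with the
    event year so cycles can be compared across calendar years.
    Column names are illustrative, not the platform's schema."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["year", "clinical_number", "analyte", "flag"]
    )
    writer.writeheader()
    for row in rows:
        writer.writerow({"year": year, **row})
    return buf.getvalue()
```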
Operational tooling for the AAB office
A database table view for incoming lab results gives the AAB-MLE team a fast read on anomalies before grading runs. Drag-and-drop ordering, method-group snapshots, demographic-import error reporting, duplicate prevention, and editable evaluation dates are the kind of small, sharp tools that look unremarkable on a feature list but prove load-bearing in production.
A development cadence that matches the event cadence
Major changes ship into a tech-support testing environment a week before an event opens to participants, so issues are caught before live cycles. Every event cycle generates its own pull list of small fixes — naming conventions, format adjustments, audit-trail surfacing, two-factor authentication groundwork — and they go in between events without disrupting the calendar.
The outcome
AAB-MLE doesn't have a software vendor. They have a platform team — one that's been on the same codebase since the program's modern era began. Five event cycles a year, decades of historical participant data, regulator audits without findings, and a refinement loop tight enough that issues raised in one cycle are usually fixed before the next.
For the labs running on AAB-MLE, the platform is invisible — which is exactly what compliance software is supposed to be.
Let's work together
Ready to take your business further?
Tell us about your project and let's create something extraordinary together.