
AI/ML Medical Device

Device Classification
🌍 Global
Updated 2025-12-26
Quick Definition

An AI/ML Medical Device is a medical device that incorporates artificial intelligence or machine learning algorithms to analyze data, support clinical decision-making, or perform diagnostic functions.

DJ Fang

MedTech Regulatory Expert
Complete Guide to AI/ML Medical Device

AI/ML (Artificial Intelligence/Machine Learning) medical devices use computational algorithms that can learn from data, identify patterns, and make predictions or recommendations to support healthcare delivery. These devices represent a rapidly growing category that presents unique regulatory challenges due to their ability to evolve and adapt over time.

Types of AI/ML medical devices:

Locked vs. Adaptive Algorithms:

Locked Algorithms:
- Algorithm is fixed after initial training and validation
- Does not change or learn from new data during clinical use
- Traditional regulatory pathways apply more straightforwardly
- Easier to validate and demonstrate consistent performance
- Most currently cleared/approved AI/ML devices use locked algorithms

Adaptive Algorithms:
- Algorithm continues to learn and modify its behavior based on new data
- May improve performance or adapt to new patient populations over time
- Requires novel regulatory approaches to ensure continued safety and effectiveness
- FDA's PCCP (Predetermined Change Control Plan) framework addresses this category
- Presents challenges for validation, version control, and post-market surveillance (the locked/adaptive contrast is sketched in code below)
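
To make this contrast concrete, here is a minimal sketch using scikit-learn on synthetic data (illustrative only, not any cleared device's implementation). The locked model's weights are frozen after validation; the adaptive one keeps updating them from data seen in clinical use:

```python
# Minimal sketch of locked vs. adaptive behavior (synthetic, illustrative).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))        # stand-in for validated training data
y_train = (X_train[:, 0] > 0).astype(int)  # stand-in labels

# Locked algorithm: trained once, then frozen. Every prediction in
# clinical use comes from exactly the weights validated at clearance.
locked = SGDClassifier(loss="log_loss", random_state=0).fit(X_train, y_train)

# Adaptive algorithm: keeps calling partial_fit on new clinical data,
# so its weights (and therefore its behavior) change after deployment.
adaptive = SGDClassifier(loss="log_loss", random_state=0).fit(X_train, y_train)
X_new = rng.normal(size=(50, 4))           # data encountered in clinical use
y_new = (X_new[:, 0] > 0).astype(int)
adaptive.partial_fit(X_new, y_new)         # the locked model never does this

x = rng.normal(size=(1, 4))
print("locked:  ", locked.predict(x))      # reproducible for a given input
print("adaptive:", adaptive.predict(x))    # may change after each update
```

The regulatory consequence is visible in the last two lines: the adaptive model's output for the same input can differ between updates, which is precisely what validation and version-control processes must account for.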

Common clinical applications:

Computer-Aided Detection (CADe):
- Identifies and marks potential abnormalities for clinician review
- Examples: mammography screening, lung nodule detection, diabetic retinopathy screening
- Typically serves as a "second reader" or alerting system
- Does not replace clinician judgment

Computer-Aided Diagnosis (CADx):
- Provides diagnostic characterization or classification of findings
- Examples: skin lesion classification, cardiac arrhythmia detection, pathology image analysis
- May assign probability scores or risk categories
- Clinician retains final diagnostic responsibility

Clinical Decision Support:
- Recommends treatment options or clinical pathways
- Predicts patient outcomes or disease progression
- Risk stratification and patient triage
- Medication dosing optimization

Image Reconstruction and Enhancement:
- Improves medical image quality
- Reduces radiation dose while maintaining diagnostic quality
- Accelerates imaging acquisition (e.g., MRI scan time reduction)

Predictive Analytics:
- Early warning systems for patient deterioration
- Sepsis prediction algorithms
- Readmission risk prediction
- Treatment response prediction

FDA regulatory framework for AI/ML devices:

Current 510(k) pathway:
Most AI/ML Software as a Medical Device (SaMD) currently reaches the market through 510(k) clearance:
- Requires substantial equivalence to predicate device
- Algorithm must be "locked" at the time of clearance
- Changes to the algorithm generally require a new 510(k) submission unless covered by an authorized PCCP
- Validation data must demonstrate safety and effectiveness

De Novo pathway:
For novel AI/ML devices without appropriate predicates:
- Establishes new device classification
- Creates pathway for future similar devices
- Example: IDx-DR (autonomous diabetic retinopathy detection)

PMA pathway:
For high-risk AI/ML devices:
- Class III devices requiring premarket approval
- Most rigorous review with clinical trial data
- Rare for AI/ML SaMD but used for some high-risk applications

Predetermined Change Control Plan (PCCP):
FDA's approach to adaptive AI/ML algorithms:
- Manufacturer specifies anticipated algorithm modifications in advance
- Defines types, methods, and extent of expected changes
- Establishes validation protocols for future modifications
- Allows certain pre-specified changes without new submission
- PCCP approved as part of initial marketing authorization
- Builds on FDA's earlier concepts of SaMD Pre-Specifications (SPS) and the Algorithm Change Protocol (ACP); a hypothetical outline of a plan's contents is sketched below
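
A PCCP is a regulatory document rather than code, but a hypothetical outline of its three main components (Description of Modifications, Modification Protocol, Impact Assessment, as named in FDA's draft guidance) might look like the following sketch; every field value here is illustrative, not FDA-mandated:

```python
# Hypothetical outline of PCCP contents (illustrative only; the real plan
# is a regulatory document and none of these values are FDA-mandated).
pccp = {
    "device": "Example CADe algorithm",
    "description_of_modifications": [
        "Retraining on additional labeled images from new clinical sites",
        "Operating-threshold tuning within a pre-specified range",
    ],
    "modification_protocol": {
        "data_management": "Curated, labeled data meeting inclusion criteria",
        "retraining_practice": "Architecture fixed; only weights may change",
        "performance_evaluation": {
            "test_set": "Sequestered and independent of all training data",
            "acceptance_criteria": {"sensitivity": ">= 0.90",
                                    "specificity": ">= 0.85"},
        },
        "update_procedure": "Versioned release with user communication",
    },
    "impact_assessment": "Benefit/risk analysis of each anticipated change",
}
```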

Key validation requirements:

Training Data:
- Representative of intended use population
- Sufficient quantity and diversity
- Well-curated and accurately labeled
- Addresses potential biases in race, age, gender, ethnicity
- Documentation of data sources and selection criteria (a simple representativeness check is sketched below)
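
One simple starting point for a representativeness check is to compare the training set's demographic mix against the intended-use population, as in the sketch below; the reference proportions and the 10% review threshold are made up for illustration:

```python
# Hypothetical training-data representativeness check (illustrative only).
import pandas as pd

train = pd.DataFrame({
    "age_group": ["18-40", "41-65", "65+", "41-65", "65+", "18-40"],
    "sex":       ["F", "M", "F", "F", "M", "M"],
})

# Made-up reference: expected mix in the intended-use population.
expected_age = {"18-40": 0.25, "41-65": 0.40, "65+": 0.35}

observed = train["age_group"].value_counts(normalize=True)
for group, expected_share in expected_age.items():
    gap = observed.get(group, 0.0) - expected_share
    flag = "  <-- review" if abs(gap) > 0.10 else ""
    print(f"{group}: observed {observed.get(group, 0.0):.2f}, "
          f"expected {expected_share:.2f}{flag}")
```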

Algorithm Transparency:
- Description of model architecture and training methodology
- Feature importance and decision-making process (see the permutation-importance sketch after this list)
- Handling of edge cases and uncertain predictions
- Known limitations and failure modes
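
Permutation importance is one widely used, model-agnostic way to document which inputs drive a model's predictions; here is a minimal scikit-learn sketch on synthetic data:

```python
# Permutation-importance sketch for documenting feature influence
# (synthetic data; illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn on held-out data and measure the score drop:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature_{i}: {mean:.3f} +/- {std:.3f}")
```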

Clinical Validation:
- Performance on independent test dataset
- Comparison to reference standard (e.g., expert clinician interpretation)
- Metrics: sensitivity, specificity, AUC, positive/negative predictive value (computed in the sketch after this list)
- Subgroup analysis to identify performance variations
- Prospective clinical studies may be required
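
All of these metrics except AUC follow directly from the confusion matrix at a chosen operating threshold; a minimal sketch with scikit-learn and toy labels:

```python
# Core clinical-validation metrics from a confusion matrix (toy data).
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]           # reference standard
y_score = [0.9, 0.2, 0.8, 0.4, 0.3, 0.1, 0.7, 0.6, 0.85, 0.15]
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]  # operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))   # true positive rate
print("specificity:", tn / (tn + fp))   # true negative rate
print("PPV:        ", tp / (tp + fp))   # positive predictive value
print("NPV:        ", tn / (tn + fn))   # negative predictive value
print("AUC:        ", roc_auc_score(y_true, y_score))
```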

Usability and Human Factors:
- How AI outputs are presented to clinicians
- Risk of automation bias (over-reliance on AI recommendations)
- Fail-safe mechanisms and error handling
- Training requirements for users

EU AI Act implications:

The EU Artificial Intelligence Act (AI Act) creates additional regulatory requirements for AI medical devices:

Risk-based classification:
- Medical device AI typically falls into the "high-risk" AI systems category
- Subject to additional requirements beyond MDR/IVDR

Additional obligations:
- High-quality training datasets with risk management for biases
- Technical documentation and automatic logging
- Transparency and provision of information to users
- Human oversight requirements
- Robustness, accuracy, and cybersecurity measures
- Conformity assessment procedures

Interaction with MDR/IVDR:
- AI medical devices must comply with both AI Act and MDR/IVDR
- Harmonized conformity assessment when possible
- Notified bodies must consider AI Act requirements

Examples of FDA-cleared/approved AI/ML devices:

IDx-DR (De Novo 2018):
- Autonomous AI system for diabetic retinopathy detection
- First FDA-authorized AI diagnostic system to provide a screening decision without requiring clinician interpretation of the image
- Uses retinal images to detect diabetic retinopathy

Viz.AI ContaCT (De Novo 2018):
- AI algorithm for stroke detection from CT scans
- Alerts stroke team when large vessel occlusion detected
- Reduces time to treatment

Paige Prostate (De Novo 2021):
- First AI-based device to aid pathologists in cancer detection
- Flags suspicious areas in digitized prostate biopsy slides
- Assists pathologist in identifying cancer

Arterys Cardio DL (510(k) 2017):
- Cloud-based AI for cardiac MRI analysis
- Automated ventricular segmentation and measurement
- First FDA-cleared cloud-based deep learning application

Challenges and considerations:

Algorithm Bias:
- Training data may not represent diverse patient populations
- Performance may vary across demographic groups
- Risk of perpetuating or amplifying existing healthcare disparities
- Requires careful validation across subpopulations (a per-subgroup check is sketched below)
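
A hypothetical subgroup check computes the same core metric, here sensitivity, for each demographic group and flags gaps; the data below is synthetic:

```python
# Per-subgroup sensitivity check (synthetic data; illustrative only).
import pandas as pd

df = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B", "B", "A"],
    "y_true": [1, 1, 0, 1, 1, 1, 0, 1],
    "y_pred": [1, 1, 0, 1, 0, 0, 0, 1],
})

def sensitivity(g):
    positives = g[g["y_true"] == 1]
    return (positives["y_pred"] == 1).mean()   # TP / (TP + FN)

print(df.groupby("group")[["y_true", "y_pred"]].apply(sensitivity))
# A large gap between groups is a signal to investigate training-data
# representation before, not after, the device reaches patients.
```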

Black Box Problem:
- Deep learning models may lack interpretability
- Difficult to explain why algorithm made specific recommendation
- Regulatory trend toward requiring explainability and transparency
- Balance between accuracy and interpretability

Data Privacy and Security:
- Large datasets required for training raise privacy concerns
- HIPAA, GDPR, and other privacy regulations apply
- Cybersecurity risks from cloud-based or networked systems
- Data de-identification and anonymization requirements (a minimal de-identification step is sketched below)
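
A minimal, hypothetical de-identification step drops direct identifiers and replaces the record key with a salted hash; this illustrates the mechanics only and does not by itself establish HIPAA or GDPR compliance:

```python
# Minimal de-identification sketch (illustrative; not a compliance guarantee).
import hashlib
import pandas as pd

records = pd.DataFrame({
    "patient_id": ["P001", "P002"],
    "name":       ["Jane Doe", "John Roe"],       # direct identifier
    "dob":        ["1960-01-01", "1975-05-05"],   # direct identifier
    "finding":    ["nodule", "clear"],
})

SALT = "replace-with-secret-salt"  # stored separately from released data

def pseudonymize(pid: str) -> str:
    return hashlib.sha256((SALT + pid).encode()).hexdigest()[:12]

deidentified = records.drop(columns=["name", "dob"]).assign(
    patient_id=records["patient_id"].map(pseudonymize)
)
print(deidentified)
```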

Post-Market Performance Monitoring:
- Algorithm performance may change over time (data drift)
- Patient populations may evolve
- Integration with different healthcare IT systems
- Need for ongoing surveillance and performance tracking (a simple drift check is sketched below)
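
One common heuristic for detecting input drift is the Population Stability Index (PSI), which compares a feature's distribution at validation time against live data; the thresholds in the final comment are rough industry conventions, not regulatory requirements:

```python
# Population Stability Index (PSI) sketch for monitoring input drift.
import numpy as np

def psi(train_values, live_values, bins=10):
    """Compare two distributions; a higher PSI means more drift."""
    edges = np.histogram_bin_edges(train_values, bins=bins)
    p = np.histogram(train_values, bins=edges)[0] / len(train_values)
    q = np.histogram(live_values, bins=edges)[0] / len(live_values)
    p = np.clip(p, 1e-6, None)   # avoid log(0) for empty bins
    q = np.clip(q, 1e-6, None)
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 5000)   # distribution at validation
live_feature  = rng.normal(0.4, 1.2, 5000)   # shifted distribution in use

print(f"PSI = {psi(train_feature, live_feature):.3f}")
# Rough convention: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
```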

Validation Complexity:
- Difficult to test all possible scenarios
- Edge cases and rare conditions challenging to validate
- Continuous learning algorithms require novel validation approaches
- Ensuring generalizability beyond training environment

Liability and Responsibility:
- Questions about responsibility when AI contributes to medical error
- Shared responsibility between manufacturer, clinician, and healthcare organization
- Need for clear labeling of intended use and limitations
- Importance of clinical oversight and human-in-the-loop design

Future directions:

FDA's AI Action Plan:
- Developing regulatory framework for adaptive algorithms
- Expanding use of PCCP approach
- Creating Good Machine Learning Practice (GMLP) guidelines
- Enhancing post-market monitoring capabilities
- International harmonization efforts

Real-World Performance Monitoring:
- Shift toward continuous monitoring in clinical use
- Real-world evidence generation
- Adaptive regulation based on post-market data
- Lessons from the now-concluded digital health Software Precertification (Pre-Cert) pilot program

International Harmonization:
- IMDRF (International Medical Device Regulators Forum) work on AI/ML
- Alignment between FDA, EU, and other major regulators
- Shared principles for AI medical device regulation
- International consensus on validation approaches

Related Terms

SaMD, Clinical Decision Support, Algorithm, Software Validation, Risk Management, Clinical Validation
