How do I share training data between monitors?
Deploy high-performing monitors by using training data from similar monitors.
Accelerate defect detection and expand monitor coverage with Smart Linked Monitors (SLMs) — a powerful way to share training data across similar monitors.
SLMs allow labeled data to be shared between monitors tracking the same failure mode and component type. This improves performance in new regions or SKUs, even when defect examples are limited. Sharing labeled data boosts recall, speeds up deployment, and reduces the number of defect examples needed to achieve full monitor coverage.
By unlocking training data from siloed monitors, SLMs help you scale across thousands of components and multiple SKUs, cutting down setup time and maximizing ROI.
Key Benefits
- Improved Monitor Performance from Day One
Share training data from similar monitors to achieve higher recall, even with limited defect examples.
- Faster Coverage Across Repeated Components
Reuse existing labeled data to cover common parts like screws and connectors without starting from scratch.
- Streamlined Monitor Management
Simplify setup and tuning for recurring components across multiple products, SKUs, or image types.
Getting Started with Smart Linked Monitors
Feature Availability
During the beta, SLMs are hidden behind a feature flag. Contact your Instrumental admin or support to enable them.
Linking Monitors
What does it mean to “link” monitors?
When monitors are linked, they can share labeled training data, enabling better detection performance with fewer examples. Linked monitors can also share data with monitors that have no defect examples, allowing you to extend coverage to new regions, SKUs, and images without needing defect examples in those areas.
What monitors are good candidates to link?
When creating a linked monitor group, it’s important to ensure the monitors are well-suited to share training data. Follow these best practices to get optimal performance:
- Monitor the same type of failure mode. All monitors in the group should be targeting the same defect or issue type.
- Use visually similar regions. Regions should be alike in appearance and size across all monitors in the group.
Monitors that are highly similar—both in defect type and visual appearance—can share both positive and negative labels. If monitors have visually different passing images, they should only share positive labels. There is a limit to how visually different passes can be while still being effective in a linked group.
You can allow greater visual differences among passing results when the failure mode is severe or highly distinct. In such cases, despite variation among passing images, performance remains better than with unlinked monitors.
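To make these guidelines concrete, here is a minimal, purely illustrative sketch of the decision they imply. The Monitor fields, the similarity score, and the 0.8 threshold are assumptions for the example only and are not part of Instrumental's product or API.

```python
from dataclasses import dataclass

@dataclass
class Monitor:
    name: str
    failure_mode: str       # e.g. "missing_screw" (hypothetical label)
    pass_similarity: float  # 0.0-1.0: how alike this region's passing images are to the group's (assumed metric)

def link_recommendation(candidate: Monitor, group_failure_mode: str,
                        similarity_threshold: float = 0.8) -> str:
    """Illustrative decision rule only: what a candidate monitor could share with a group."""
    if candidate.failure_mode != group_failure_mode:
        return "do not link"                    # only link monitors targeting the same failure mode
    if candidate.pass_similarity >= similarity_threshold:
        return "share positives and negatives"  # visually similar regions can share all labels
    return "share positives only"               # same failure mode, visually different passes

print(link_recommendation(Monitor("screw_B", "missing_screw", 0.6), "missing_screw"))
# -> "share positives only"
```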
How to Link Monitors
To link a monitor, navigate to its tile on the Monitor page and click Assign Group.
Clicking Assign Group provides two options: you can either choose an existing group to link the monitor to or create a new group.
When choosing an existing group, make sure the monitor is a good fit by following the guidelines outlined in the previous section.
If you select Create New Group, you’ll be prompted to name the group, add monitors, and choose a data sharing mode.
Editing Linked Monitor Groups
Edit actions for a Linked Monitor Group are accessible by clicking the group name on any monitor in the group.
There are four supported edit actions:
- Rename the monitor group
- Add monitor to group
- Remove monitor from group
- Change the data sharing mode
All four actions are accessible from the Edit Linked Monitor Group Modal.
Rename the monitor group
To rename a monitor group, update the Group Name field in the Edit Linked Monitor Group modal. Once you’ve made your changes, click Update to save.
Add Monitor to a Group
To add monitors to a linked monitor group, click Add Monitor in the Edit Linked Monitor Group modal.
Removing Monitors from Groups
To remove monitors from a linked monitor group, open the Edit Linked Monitor Group modal from any monitor within the group. Click the X next to the monitors you want to remove, then click Update to save your changes.
You cannot remove the monitor you used to open the modal from within the modal itself. To remove that monitor, exit the modal and use the Remove from [Monitor Group Name] option on its main page.
Hence, the fastest way to completely remove a monitor group is to:
- Open the Edit Linked Monitor Group modal from any monitor in the group.
- Remove all other monitors.
- Click Update.
- Return to the monitor’s page and click Remove from “[Monitor Group Name]” to remove the final monitor and dissolve the group.
Changing the Data Sharing Mode
The Data Sharing Mode can be changed by selecting a new mode from the Data Sharing Mode dropdown and clicking Update. See below for more information on Data Sharing Modes.
Data Sharing Modes
When creating a Smart Linked Monitor group, you can control how labeled training data is shared across monitors. Choosing the right data sharing mode can significantly affect model performance and collaboration outcomes.
There are seven data sharing modes in total: four standard modes, plus three enhanced modes that use smart sampling to optimize learning.
Summary Table: Choosing the Right Mode
Mode | Shares | Smart Sampling | Best Use Case
--- | --- | --- | ---
Full Dataset – Smart | Positives + negatives | Yes | Same failure mode on the same component
Positives Only – Smart | Positives only | Yes | Same failure modes in regions with slight visual differences
Negatives Only – Smart | Negatives only | Yes | Advanced anomaly monitors; not generally recommended
Positives Only | Positives only | No | Same failure modes in regions with slight visual differences; rarely recommended over its smart sampling counterpart
Negatives Only | Negatives only | No | Not recommended
Full Dataset | Positives + negatives | No | Full data access without sampling (not preferred)
Full Model | Entire ML model | N/A | Identical regions where a single shared model is ideal (e.g., pin inspection)
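For intuition only, the sketch below illustrates what "shares" means in the table: which labels each monitor contributes to the pooled training data a linked monitor can learn from under each mode. The dictionary layout and mode identifiers are assumptions for illustration, not Instrumental's internals; the smart variants additionally sample a relevant subset rather than pooling everything, and Full Model shares the trained model itself rather than data.

```python
# Conceptual sketch only. Positives = failure labels, negatives = pass labels.

def pooled_labels(group_monitors, mode):
    """Labels contributed to the shared training pool under a given data sharing mode."""
    share_positives = mode in ("full_dataset", "full_dataset_smart",
                               "positives_only", "positives_only_smart")
    share_negatives = mode in ("full_dataset", "full_dataset_smart",
                               "negatives_only", "negatives_only_smart")
    pool = []
    for m in group_monitors:
        if share_positives:
            pool += m["positives"]
        if share_negatives:
            pool += m["negatives"]
    return pool  # smart modes would then sample from this pool rather than use all of it

group = [
    {"name": "screw_A", "positives": ["fail_1", "fail_2"], "negatives": ["pass_1"]},
    {"name": "screw_B", "positives": ["fail_3"], "negatives": ["pass_2", "pass_3"]},
]
print(pooled_labels(group, "positives_only"))      # ['fail_1', 'fail_2', 'fail_3']
print(pooled_labels(group, "full_dataset_smart"))  # all five labels, before sampling
```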
Smart Sampling Modes (Recommended)
These modes automatically sample relevant data to improve training outcomes. They are the recommended options for most use cases.
Full Dataset – Smart Sampling (Default & Recommended)
Shares: Both failure (positive) and pass (negative) labels
Best for: Groups monitoring the same failure mode on the same component
Why use it: Smartly samples both positives and negatives to enhance individual monitor performance.
Positives Only – Smart Sampling
Shares: Only labels marked as failures (positives)
Best for: Groups monitoring the same failure mode in similar-sized regions with slight visual variation
Why use it: Focuses training on shared failures while allowing flexibility among passes.
Negatives Only – Smart Sampling
Shares: Only labels marked as passes (negatives)
Best for: Advanced anomaly monitors with few units. Consult with your Instrumental representative if this is right for your use case.
Why use it: Focuses training on shared examples of passing units while allowing each monitor flexibility in how it learns failures.
Standard Sharing Modes (Use with Caution)
These modes offer more direct control by not including sampling logic. They are less optimized and usually not preferred.
Positives Only
Shares: Only failure (positive) labels
Use when: You need to isolate and share only failing examples
Note: Generally not recommended over the smart sampling version.
Negatives Only
Shares: Only pass (negative) labels
Note: Rarely recommended; this mode has limited utility in most training scenarios.
Full Dataset
Shares: All labeled data (both positives and negatives) without sampling
Use when: You need full data access without sampling logic
Note: Generally not recommended over the smart sampling version.
Full Model
Shares: The exact same machine learning model across all monitors in the group
Best for: Identical or nearly identical visual regions (e.g., multiple pins on the same connector)
Why use it: Ensures complete model consistency across monitors, but only suitable for highly similar use cases.
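The structural difference between Full Model and the data-sharing modes can be pictured as follows. This is a conceptual sketch under assumed names (train_model is a stand-in), not the product's architecture: it assumes data-sharing modes still maintain a model per monitor trained on pooled labels, while Full Model points every monitor in the group at one shared model.

```python
def train_model(labels):
    """Stand-in for training; returns a distinct object so identity can be compared."""
    return object()

def build_shared_dataset_group(monitors, pooled_labels):
    # Data-sharing modes: each monitor keeps its own model, trained on the pooled labels.
    return {name: train_model(pooled_labels) for name in monitors}

def build_full_model_group(monitors, pooled_labels):
    # Full Model mode: one model is trained and reused by every monitor in the group.
    shared = train_model(pooled_labels)
    return {name: shared for name in monitors}

pins = ["pin_1", "pin_2", "pin_3"]
dataset_group = build_shared_dataset_group(pins, ["fail_1", "pass_1"])
full_model_group = build_full_model_group(pins, ["fail_1", "pass_1"])
print(dataset_group["pin_1"] is dataset_group["pin_2"])        # False: separate models
print(full_model_group["pin_1"] is full_model_group["pin_2"])  # True: one shared model
```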
Tuning Linked Monitors
There is currently no dedicated tuning workflow for smart linked monitors. However, you can still tune individual monitors, and those changes will impact the group’s shared training dataset. For details on re-labeling and tuning of monitors, see the Re-Labeling Monitor Results section.
The key difference between the ungrouped and grouped monitor training workflows appears only when AI Automation is set to Manual. Once a new AI model is available, you'll have the option to apply it to the re-labeled monitor only or to all linked monitors.
If you would like to change your AI Automation settings, please reach out to your Instrumental representative.
Best Practices
Bulk actions to avoid transient state
When working with monitor groups, it’s best to make changes in bulk. This helps avoid issues caused by the group entering a transient state from updates like adding, removing, or re-labeling monitors.
What counts as a bulk action:
- Add all relevant monitors to the group before clicking Update.
- Remove all relevant monitors before clicking Update.
- Apply re-labeling actions without pausing for more than three minutes, to prevent automatic retraining from starting mid-process.
Not following these best practices may result in a Pending Candidate Model error, which temporarily blocks edits to the group while a model retrains. In some cases, simply refreshing the page can resolve this error.
Summary: Putting Smart Linked Monitors to Work
You’ve now learned how Smart Linked Monitors (SLMs) enable the sharing of valuable training data between monitors targeting the same failure modes on visually similar components. This feature is designed to improve monitor performance, speed up the deployment process across multiple SKUs or regions, and simplify the management of recurring component inspections.
Key takeaways include:
- Link strategically: Group monitors with the same failure mode and similar visual characteristics.
- Choose the right mode: Select a data sharing mode (like the recommended Smart Sampling options) that fits your group’s needs.
By effectively implementing SLMs, you unlock the collective knowledge within your labeled data, leading to faster, more robust defect detection across your operations.