This page is about writing software to send images and video from a factory to Instrumental. It assumes that you have read How do I import and use images and video? and have worked with an Instrumental Solutions Architect to decide that this is the best implementation option. If that’s the case, that means you have a way to obtain the images or videos you want to send, as well as a place you can run the integration script you will need to write.
After you purchase Image Streams or Video Streams, Instrumental will set up a test project in the Instrumental Web App so that you can develop an integration without sending test data to your “real” project.
API Keys
To send data, you will need an API Key with “write” permissions. You can read about API Keys and how to obtain them on the API Keys page.
Image and Video Streams API
Domain/endpoint
Send data to `https://api.instrumental.ai/api/v0/externalData/ingestImage` with the `POST` HTTP method.
Please work with your networking teams to unblock TCP on port 443 (HTTPS access) for this domain on whatever network your integration script will run. The domain does not have a fixed IP address or IP range, so the domain itself must be allowed. If necessary, Instrumental can discuss setting up a proxy with a fixed IP.
Requests must have the following headers:
```
instrumental-api-key: YOUR_API_KEY
content-type: application/json
```
Of course, replace `YOUR_API_KEY` with your actual, full key (it should contain `INST:V1` – don’t use only the identifier displayed in the API key modal).
Default limits
Uploaded images and videos may be up to 50 megabytes. Larger requests will be rejected with a 413 Payload Too Large error. The JSON component of requests may be up to 4 MB.
Clients have a rate-limit bucket of 50 ingest requests that refills at 5 requests per second, shared across all ingest API endpoints. This allows some burst capacity if you need to send more requests for a short time. In some circumstances rate limits may be applied per factory site. If the rate limit is exceeded, requests will be rejected with a 429 Too Many Requests error until the rate-limit bucket refills enough for new requests to get through.
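If your integration can burst past this budget, you can throttle on the client side with a token bucket that mirrors the limits above. This is an illustrative sketch (the class and its defaults are our own, not an official Instrumental client); call `acquire()` before each ingest request.

```python
import time


class TokenBucket:
    """Client-side throttle mirroring a capacity-50, 5-per-second refill limit."""

    def __init__(self, capacity=50, refill_per_second=5):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def _refill(self):
        # Add tokens for the time elapsed since the last refill, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_second)
        self.last_refill = now

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            self._refill()
            if self.tokens >= 1:
                self.tokens -= 1
                return
            # Sleep just long enough for the next token to become available.
            time.sleep((1 - self.tokens) / self.refill_per_second)
```

Staying slightly under the server's advertised rate avoids ever seeing a 429, which is usually simpler than implementing retry logic after the fact.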
Additionally, when uploading images and videos, you can specify metadata such as the name of the station that took the image or video. There are limits on how many different values may ever be provided in these fields per project:
- 50 builds
- 50 lines
- 1000 configs
- 500 station names
- 1000 station-fixture IDs
- 1000 image/video types
Each unit (unique serial number) may have no more than 500 image/video inspections, 500 total images/videos, and 349 subassemblies. In cases where subassemblies are merged into a final assembly, these limits apply to the merged unit, not to each component individually. If a merged unit exceeds these limits, only a subset of data will be shown, along with a warning that this is the case.
If you expect to exceed any of the limits described above, please discuss with your Account Manager or Solutions Architect.
Request
The endpoint accepts content in `multipart/form-data` format with two parts: an image or video file with a `file` key and a JSON object with a `data` key.
Examples
An example using cURL on the command line:

```shell
curl -i -X POST -H "Content-Type: multipart/form-data" \
  -H "instrumental-api-key: YOUR_API_KEY" \
  -F "file=@/Users/TestUser/Pictures/test-image.jpeg" \
  -F "data={\"serialNumber\":\"SN-ABCD1234\",\"stationName\":\"Station-Test-1\",\"imageType\":\"Test-Image-Type\",\"imageTimestamp\":{\"ianaTimeZone\":\"Asia/Shanghai\",\"iso8601Time\":\"2020-02-14T00:52:10.499Z\"}}" \
  https://api.instrumental.ai/api/v0/externalData/ingestImage
```
(If you want to test this, make sure to replace `YOUR_API_KEY` with your actual API key, and provide a path to a real file.)
An example using Python:
This example uses Python 3 and assumes the requests library is installed:
```python
#!/usr/bin/env python3
import argparse
import json
import mimetypes
import os
import requests  # https://requests.readthedocs.io/en/master/
from datetime import datetime


def main():
    """
    Runs when the program starts. Try `python3 uploader.py --help` for info.
    """
    parser = argparse.ArgumentParser(description="Upload an image or video to Instrumental")
    parser.add_argument("infile", type=argparse.FileType("rb"), help="the file to upload")
    parser.add_argument("--apikey", type=str, default="",
                        help="the project API key to use to upload. Omit for a dry run")
    parser.add_argument("--station", type=str, default="Assembly step",
                        help="the name of the station where the image/video was taken")
    parser.add_argument("--tz", type=str, default="America/Los_Angeles",
                        help="the IANA time zone where the image/video was taken")
    parser.add_argument("--verbose", action="store_true",
                        help="pretty-print the request and response JSON")
    args = parser.parse_args()

    url = "https://api.instrumental.ai/api/v0/externalData/ingestImage"
    headers = {
        "instrumental-api-key": args.apikey,
    }

    file_path = args.infile.name
    file_name = os.path.splitext(os.path.basename(file_path))[0]
    serial_number = file_name
    media_content_type = mimetypes.guess_type(file_path)[0]
    # To parse a date out of a file path like "2022/06/29/video.mp4" instead of
    # using the current date, you can write something like:
    # datetime.strptime(os.path.dirname(file_path), "%Y/%m/%d")
    date = datetime.now()

    json_request = {
        "serialNumber": serial_number,
        "stationName": args.station,
        "imageType": args.station,
        "imageTimestamp": {
            "ianaTimeZone": args.tz,
            "iso8601Time": date.isoformat(timespec="milliseconds") + "Z"
        },
    }
    request = {
        "data": (None, json.dumps(json_request), 'application/json'),
        "file": (file_path, args.infile, media_content_type)
    }

    if args.verbose:
        print("REQUEST: {}".format(json.dumps(json_request, indent=4)))

    if args.apikey:
        response = requests.post(url, files=request, headers=headers)
        if args.verbose:
            print("RESPONSE: {}".format(response.status_code))
            print(response.content.decode('utf-8'))
    else:
        print("No API key provided; file was not uploaded")


if __name__ == "__main__":
    main()
```
Validation
When images and videos are uploaded, they are validated asynchronously and scanned for viruses before they become available in the Instrumental web app. To pass validation, images must be in JPEG format and videos must be in either MP4 or WEBM format. Images and videos must be 32MP or less (roughly 6528x4896px at a 4:3 aspect ratio, for example). If the file’s name does not end in .jpg, .jpeg, .mp4, or .webm, it will be rejected synchronously.
The video must use a codec that can actually play in web browsers (e.g. H264, AV1, VP9, VP8 – not mp4v). You can check your video’s codec by installing ffmpeg (on macOS with Homebrew, you can install with `brew install ffmpeg`) and running this command, substituting the path to your video:
```shell
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of default=noprint_wrappers=1:nokey=1 /PATH/TO/VIDEO.mp4
```
You should get a response like `h264`. If you get `mpeg4` instead, convert your video to the h264 codec with this command, substituting the paths:
```shell
ffmpeg -i /PATH/TO/INPUT_VIDEO.mp4 -c:v libx264 /PATH/TO/OUTPUT_VIDEO.mp4
```
Parameters
Here is the shape of the JSON part of the request:
```json
{
  "serialNumber": "Unit SN",
  "stationName": "Station name",
  "imageType": "Name of what the image/video shows",
  "imageTimestamp": {
    "ianaTimeZone": "America/Los_Angeles",
    "iso8601Time": "YYYY-MM-DDTHH:mm:ss.SSSZ"
  },
  "tags": [
    "Optional tag 1",
    "Optional tag 2"
  ],
  "configName": "Optional config name",
  "imageConfigs": {
    "configs": {
      "CustomAttribute": "CustomValue"
    }
  },
  "fixtureName": "Optional station-fixture ID",
  "fileName": "Optional file name",
  "lineName": "Optional line name",
  "buildName": "Optional build name",
  "parentAssembly": {
    "unitSerial": "Parent SN",
    "relationshipName": "Display"
  },
  "subassemblies": [
    {
      "unitSerial": "Child SN",
      "relationshipName": "Coverglass"
    },
    ...
  ]
}
```
In more detail:
Required parameters
serialNumber: string
The serial number associated with the inspected unit. The uploaded image/video will be shown with this SN in the web app. If you have images/videos that are not associated with a specific serial number (e.g. if you have a lot code but not separate IDs for specific units in that lot) please consult with your Instrumental Solutions Architect about what to put in this field. Must not be an empty string.
stationName: string
Name of the station where the image/video was captured. Must not be an empty string.
imageType: string
Describes what is in the image/video, e.g. “PCB Post-Glue” or “Finished Good Top.” Should not be unique per image/video; rather, images/videos that have the same view of the product at the same point in the assembly line should have the same `imageType`. Because of this, each `imageType` should only be used at one station. Discuss with your Instrumental Customer Success representative if you plan to move an `imageType` to a different station or if you plan to change what is in the view without changing the name of the `imageType` or using different `imageConfigs` (discussed below). When you create a monitor to catch defects in images, the monitor will be associated with an `imageType` and will analyze all new photos with that `imageType`. Must not be an empty string.
imageTimestamp.iso8601Time: ISO 8601 datetime string
The time at which this image/video was taken, as an ISO 8601 time string, i.e.: “YYYY-MM-DDTHH:mm:ss.SSSZ”. The string does not have to be in UTC, but note that any offset here will be used only for determining the moment in time — not for any additional time zone math (e.g. calculating the “clock-on-the-wall” time). Instead, the zone string provided in the “ianaTimeZone” field of this record will be used for that.
imageTimestamp.ianaTimeZone: DateTimeZone
The time zone in which this image/video was taken, as a time zone name found in the IANA tz database, e.g.: “America/Los_Angeles”, “Asia/Shanghai”.
Optional parameters
tags: array[string]
Short metadata associated directly with the image/video that should be present when viewing the image/video in the web app. Up to 100 tags may be provided per request. Tags are best used when they are non-unique but also not on every image/video, for example as a way of indicating that an engineer pulled the depicted unit off the line for teardown. However, rather than using tags, it’s often preferable to use the Data Streams API to upload text-based data. Data Streams supports structured data, so it can be visualized in more interesting ways.
configName: string
The product variant/SKU depicted in the image/video.
imageConfigs: map[string, string]
Custom attributes that Monitor and Discover can filter on to avoid detecting expected variations as anomalies. Currently ignored for videos.
fixtureName: string
The ID of the fixture/camera used for capturing the image/video. Useful when there are multiple cameras at the same station.
fileName: string
Mostly ignored. If you upload an image/video via the Image Streams API and later download it through the web app, the downloaded file’s name will start with a random prefix and end with a sanitized version of this specified filename, if provided. If it is not provided, a filename might be extracted from the `file` portion of the upload request and used as the suffix instead.
lineName: string
The name of the assembly line on which the image/video was captured. If one isn’t provided, the line name will be recorded as “Image Stream Line.”
buildName: string
The build that was active when this image/video was taken. If one isn’t provided, the project’s current build will be used.
parentAssembly: APIExternalDataAssemblyRelationship
subassemblies: array[APIExternalDataAssemblyRelationship]
Note: these fields are only available to projects created after April 18, 2022. Contact Instrumental if you want to use them for projects created before that.
These fields define how components with their own serial numbers are combined to produce the final assembly. For example, a device might have a structure like this:
```
Final assembly: ENC2124
├─ PCB subassembly: PCB4149
├─ Display subassembly: DIS8801
│  └─ Coverglass subassembly: GLS1035
└─ Battery subassembly: BAT7257
```
It is likely that the relationships will not be known at every inspection step – for example, when the battery is inspected, it may not be known which enclosure it will go into. That’s fine; the relationships only have to be uploaded once, and only in one direction (though doing it again won’t hurt).
Assemblies can have multiple subassemblies but only one “active” parent. If an assembly already has a parent and you upload a relationship that would give it a different parent, the more recent request becomes the “active” one. Please note that assembly relationships may not have cycles. That is, an error will be returned when attempting to upload a relationship that would cause an assembly to have another assembly among both its ancestors and descendants – even if one of the links in that chain is not “active.”
If you upload a subassembly without a parent, it will initially be represented as its own unit in the Instrumental app. When a relationship with a parent is established, it will be merged with the parent unit. This merging allows data for all the components in a final assembly to be reviewed and correlated together. If a component’s active parent assembly changes, it will be un-merged from its previous parent and merged into the new one.
If you are doing a bulk upload, your data will be available in the app faster if you upload units’ relationships before or in the same request as you upload those units’ first inspections because Instrumental can then merge the assemblies immediately instead of creating them separately and merging them later.
APIExternalDataAssemblyRelationship
unitSerial: string
The serial number of the unit on the other side of the described relationship.
relationshipName: string
The name of the subassembly as it relates to the parent assembly, e.g. “Left display”.
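To illustrate how the relationship fields plug into the `data` part of an upload, here is a minimal sketch; the function and argument names are our own, not part of the API:

```python
import json


def build_ingest_data(serial_number, station_name, image_type, image_timestamp,
                      parent=None, subassemblies=None):
    """Assemble the JSON `data` part, attaching assembly relationships if known.

    `parent` and each entry of `subassemblies` are (unit_serial, relationship_name)
    pairs. Relationships that are not yet known can simply be omitted and
    uploaded later, from either direction.
    """
    data = {
        "serialNumber": serial_number,
        "stationName": station_name,
        "imageType": image_type,
        "imageTimestamp": image_timestamp,
    }
    if parent is not None:
        data["parentAssembly"] = {
            "unitSerial": parent[0],
            "relationshipName": parent[1],
        }
    if subassemblies:
        data["subassemblies"] = [
            {"unitSerial": sn, "relationshipName": name}
            for sn, name in subassemblies
        ]
    return json.dumps(data)
```

Using the example device tree above, the display inspection could declare both its parent enclosure and its coverglass child in one request, letting Instrumental merge the units immediately.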
Response
If the request is successful, the response will have an HTTP status code of 200, and the response body will be a JSON object with just a `fileId` key, like `{ "fileId": "e0Blm7EjlLQ2" }`. After the request returns, there will be a delay of typically up to 60 seconds before the file will appear in the Instrumental web app. To check the status of your uploads, see the Checking file status section below.
Once the file is available in the web app, you can visit `https://app.instrumental.ai/files/{fileId}` to see it, replacing `{fileId}` with the string from the API response. More likely, you will want to find it by browsing/filtering units in the Explore tab.
Failed requests will receive one of the following HTTP status codes:
- 400 – Failure to parse or validate the request
- 401 – Invalid API key in the header
- 413 – Request is too large; see the Default limits section
- 429 – Rate limits exceeded; see the Default limits section
- 500 – Server error
If a request fails with one of these status codes, none of the parts of the request will be saved.
In the case of a 400 error, the response body will normally have the structure `{ "code": "CODE", "msg": "Explanation" }`. The code will be one of the following strings:
- INVALID_DATA: The `serialNumber`, `stationName`, or `imageType` field is empty, or the file’s name does not end in one of the allowed file extensions.
- REQUEST_DATA_NOT_FOUND: No `data` section was present in the request.
- IMAGE_FILE_NOT_FOUND: No `file` section was present in the request.
- FILE_TOO_BIG: The file portion of the request exceeded the maximum size; see the Default limits section.
- LIMIT_EXCEEDED: This request would have created too many entities; see the Default limits section.
- INVALID_ASSEMBLY_RELATIONSHIPS: At least one of the specified subassembly/parent assembly relationships in the request caused a problem such as exceeding the maximum number of allowed subassemblies.
However, if the request is not formatted as valid JSON or if the JSON structure does not match the format described above, the response body will have a different format that describes what is wrong. For example, you may see a message like this:
```json
{"error":"Could not parse request body as the expected JSON format.","reason":"ERROR :: /fieldName :: explanation of what shape the field is supposed to be"}
```
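If your integration logs failures programmatically, it may need to handle both 400 body shapes. A minimal sketch (the helper name is ours) that distinguishes the coded form from the parse-failure form:

```python
import json


def describe_ingest_error(status_code, body_text):
    """Return a human-readable summary of a failed ingest response, or None on success.

    Handles both the coded form {"code": ..., "msg": ...} and the parse-failure
    form {"error": ..., "reason": ...} described above.
    """
    if status_code == 200:
        return None
    try:
        body = json.loads(body_text)
    except ValueError:
        return "HTTP {}: unparseable response body".format(status_code)
    if "code" in body:
        return "HTTP {}: {} ({})".format(status_code, body["code"], body.get("msg", ""))
    if "error" in body:
        return "HTTP {}: {} {}".format(status_code, body["error"], body.get("reason", ""))
    return "HTTP {}".format(status_code)
```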
Reviewing errors
API requests that authenticate successfully (i.e. they have a valid `instrumental-api-key` header and are not rate limited) but fail other validation (e.g. the request body is invalid) will be temporarily accessible for debugging through the API key modal. You can read more about this on the API Keys page.
Checking file status
When a file is uploaded, it is asynchronously validated before it becomes available in the Instrumental web app, and this process can reject the file. Since upload requests return before this happens, you may want to programmatically check to find out if your file was validated successfully.
Request
To do this, make an HTTP `GET` request to `https://api.instrumental.ai/api/v0/externalData/fileStatus?ids=FILE_ID`, replacing `FILE_ID` with a file ID string returned by an `ingestImage` request. You can request the status of multiple files at once by adding multiple `ids` parameters to the request. Here’s an example using cURL:
```shell
curl -XGET -H 'instrumental-api-key: YOUR_API_KEY' \
  'https://api.instrumental.ai/api/v0/externalData/fileStatus?ids=FILE_ID_1&ids=FILE_ID_2'
```
(If you want to test this, make sure to replace `YOUR_API_KEY` with your actual API key, and provide real file IDs.)
Response
If successful, the response will have an HTTP status code of 200, and the response body will be a JSON object mapping the input file IDs to their statuses. Here’s an example:
```json
{
  "statuses": {
    "e0Blm7EjlLQ2": "UPLOADED",
    "s5tZdX5qDuN5": "PENDING_VALIDATION"
  }
}
```
The file status code can be one of the following:
- PENDING_VALIDATION: The file has been uploaded and validation will start soon or is already in progress.
- FAILED_VALIDATION: The file was detected to be malicious or did not meet other requirements such as file type checks.
- UPLOADED: The file has been validated and is available in the Instrumental web app.
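For bulk uploads, you might poll fileStatus until every file settles into a terminal state. The sketch below keeps transport out of the loop: `fetch_statuses` is any callable that takes a list of file IDs and returns the `statuses` mapping shown above (for example, a `requests.get` against the endpoint with your `instrumental-api-key` header; that wiring is not shown and the function names are our own).

```python
import time

TERMINAL_STATUSES = {"UPLOADED", "FAILED_VALIDATION"}


def wait_for_files(file_ids, fetch_statuses, poll_interval=5.0, timeout=120.0):
    """Poll until every file reaches a terminal status or the timeout elapses.

    Returns a dict mapping each settled file ID to its terminal status; IDs
    still pending at the deadline are simply absent from the result.
    """
    deadline = time.monotonic() + timeout
    results = {}
    pending = list(file_ids)
    while pending:
        statuses = fetch_statuses(pending)
        for file_id in list(pending):
            # Treat a missing entry as still pending validation.
            status = statuses.get(file_id, "PENDING_VALIDATION")
            if status in TERMINAL_STATUSES:
                results[file_id] = status
                pending.remove(file_id)
        if not pending or time.monotonic() >= deadline:
            break
        time.sleep(poll_interval)
    return results
```

Choose a `poll_interval` that stays well within the rate limits described earlier; given the typical sub-60-second validation delay, polling every few seconds is plenty.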
This request can fail for the usual reasons mentioned above (e.g. returning a 401 if a valid API key is not provided). In addition, the request will fail with an HTTP 400 status code if any of the provided file IDs are not valid. In that case the response body will have the structure `{ "code": "CODE", "msg": "Explanation" }` with a code of either `COULD_NOT_PARSE` (if the ID string is malformed) or `FILE_NOT_FOUND` (if the ID does not reference an accessible file).