raic-foundry package
raic_vision module
Raic Vision models are a powerful tool for performing object detection and classification without costly model training.
Here are some quickstart examples for creating a run with these models. Make sure to log in to Raic Foundry first:
from raic.foundry.client.context import login_if_not_already
# Login to Raic Foundry (prompted on the command line)
login_if_not_already()
Example: Perform object detection with a Raic Vision model
import time

from raic.foundry.datasources import Datasource
from raic.foundry.models import RaicVisionModel
from raic.foundry.raic_vision import RaicVisionRun

# Look up existing data source record
data_source = Datasource.from_existing('My Existing Data Source')

# Look up model from model registry
raic_vision_model = RaicVisionModel.from_existing('My Raic Vision Model', version='latest')

# Start new raic vision run
run = RaicVisionRun.new(name='My New Inference Run', data_source=data_source, raic_vision_model=raic_vision_model)

# Poll until the run completes, then fetch its predictions
while not run.is_complete():
    time.sleep(10)

data_frame = run.fetch_predictions_as_dataframe()
print(data_frame)
Example: Iterating results from a query as an alternative
from raic.foundry.raic_vision import RaicVisionRun
...
for prediction in run.iterate_predictions():
    print(prediction)
- class raic_vision.RaicVisionRun(record: dict)[source]
Bases: InferenceRun
- classmethod from_existing(identifier: str)[source]
Look up an existing inference run by its UUID or its name. Note: if there are multiple runs with the same name, looking up by name will fail with an Exception.
- Parameters:
identifier (str) – Either the UUID of the inference run or its name
- Raises:
Exception – If multiple runs are returned with the same name
- Returns:
InferenceRun
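For example, a previously created run can be looked up and its results fetched without re-running inference; a minimal sketch (the run name below is a placeholder):

```python
from raic.foundry.raic_vision import RaicVisionRun

# Look up by name; passing the run's UUID works as well
run = RaicVisionRun.from_existing('My New Inference Run')

if run.is_complete():
    # Skip the embedding vectors to keep the data frame small
    data_frame = run.fetch_predictions_as_dataframe(include_embeddings=False)
    print(data_frame)
```

Both is_complete() and fetch_predictions_as_dataframe() are inherited from InferenceRun.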
- classmethod from_prompt(data_source: Datasource, name: str | None = None, raic_vision_model: RaicVisionModel | None = None)[source]
- classmethod new(name: str, data_source: Datasource, raic_vision_model: RaicVisionModel | str)[source]
Create a new raic vision inference run
- Parameters:
name (str) – Name of new inference run
data_source (Datasource) – Data source object representing imagery already uploaded to a blob storage container
raic_vision_model (RaicVisionModel | str) – Model combining the universal detector, vectorizer and prediction models into one.
- Raises:
Exception – If no vectorizer model is specified
- Returns:
InferenceRun
datasources module
Data source to use as an input to an inference run
Here are some quickstart examples. Make sure to log in to Raic Foundry first:
from raic.foundry.client.context import login_if_not_already
# Login to Raic Foundry (prompted on the command line)
login_if_not_already()
Example: Create new data source from local imagery
from raic.foundry.datasources import Datasource
# Create data source record and upload imagery
name = 'My New Data Source'
local_path = '[Local Imagery]'
data_source = Datasource.new_from_local_folder(name, local_path)
print(data_source)
Example: Look up existing data source by name
from raic.foundry.datasources import Datasource
# Look up existing data source record
name = 'My Existing Data Source'
data_source = Datasource.from_existing(name)
print(data_source)
Example: Look up existing data source by UUID
from raic.foundry.datasources import Datasource
# Look up existing data source record
id = '72350d6d-65b6-4742-a8e0-4753ae92d0e2'
data_source = Datasource.from_existing(id)
print(data_source)
- class datasources.Datasource(datasource_id: str, record: dict, local_path: Path | None = None, needs_upload: bool | None = False)[source]
Bases: object
- classmethod from_existing(identifier: str) → Datasource[source]
Look up an existing data source by its UUID or its name. Note: if there are multiple data sources with the same name, looking up by name will fail with an Exception.
- Parameters:
identifier (str) – Either the UUID of the datasource or its name
- Raises:
Exception – If multiple datasources are returned with the same name
- Returns:
Datasource
- classmethod from_prompt(prepare_imagery: bool = True, upload_imagery: bool = True) → Datasource[source]
- classmethod new_from_local_folder(name: str, local_path: Path | str, prepare_imagery: bool = True, upload_imagery: bool = True) → Datasource[source]
Create a new data source from local imagery. If prepare_imagery is set to True (default), the local folder will be searched for usable imagery files.
PLEASE NOTE: this may require more than twice the disk space of the original imagery.
For each file found, the following transformations will be made:
1) Archive files (.zip, .tar, .bz2, .gz, .xz) will be unpacked
2) Geospatial raster files (all single-file formats supported by gdal; multi-file formats not yet supported) will be transformed to EPSG:4326 geotiff (.tif)
3) Geotiff (.tif) files larger than 9792px in width or height will be split into smaller tiles of 9792px
4) Imagery formats (.jpg, .png, .bmp, .gif) are read and left unchanged
- Parameters:
name (str) – Desired name of the new data source
local_path (Path | str) – Local path containing imagery to upload to the data source (aka blob storage container)
prepare_imagery (bool, optional) – Whether to transform imagery in the local folder. Defaults to True.
- Raises:
Exception – If local folder does not exist
Exception – If local folder contains no files
- Returns:
Datasource
- prepare()[source]
Search the local folder for usable imagery files.
PLEASE NOTE: this may require more than twice the disk space of the original imagery.
For each file found, the following transformations will be made:
1) Archive files (.zip, .tar, .bz2, .gz, .xz) will be unpacked
2) Geospatial raster files (all single-file formats supported by gdal; multi-file formats not yet supported) will be transformed to EPSG:4326 geotiff (.tif)
3) Geotiff (.tif) files larger than 9792px in width or height will be split into smaller tiles of 9792px
4) Imagery formats (.jpg, .png, .bmp, .gif) are read and left unchanged
- Raises:
Exception – If local folder does not exist
Exception – If local folder contains no files
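As a sketch of when prepare() is useful: imagery preparation can be deferred at creation time and run later, for instance once enough disk space is available. The name below is a placeholder, and '[Local Imagery]' stands in for a real path as in the quickstart above:

```python
from raic.foundry.datasources import Datasource

# Create the data source record without transforming or uploading imagery yet
data_source = Datasource.new_from_local_folder(
    'My Deferred Data Source',
    '[Local Imagery]',
    prepare_imagery=False,
    upload_imagery=False,
)

# Later: unpack archives, reproject rasters to EPSG:4326 and tile large geotiffs
data_source.prepare()
```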
inference module
Inference runs execute Raic Foundry object detection, vectorization and prediction.
Inference runs can serve a variety of different purposes. They can operate on both geospatial and non-geospatial imagery formats, taking into account their temporal tags whenever possible.
Here are some quickstart examples. Make sure to log in to Raic Foundry first:
from raic.foundry.client.context import login_if_not_already
# Login to Raic Foundry (prompted on the command line)
login_if_not_already()
Example: Object detect and vectorize crops using default models
from raic.foundry.datasources import Datasource
from raic.foundry.inference import InferenceRun
# Look up existing data source record
data_source = Datasource.from_existing('My Existing Data Source')
# Start new inference run
run = InferenceRun.new(name='My New Inference Run', data_source=data_source)
data_frame = run.wait_and_return_dataframe()
print(data_frame)
Example: Only vectorize images (aka classification only)
from raic.foundry.datasources import Datasource
from raic.foundry.inference import InferenceRun
# Look up existing data source record
data_source = Datasource.from_existing('My Existing Data Source')
# Start new inference run
run = InferenceRun.new(name='My New Inference Run', data_source=data_source, universal_detector=None)
data_frame = run.wait_and_return_dataframe()
print(data_frame)
Example: Fully customize the universal detector, vectorizer model and prediction model
from raic.foundry.datasources import Datasource
from raic.foundry.models import UniversalDetector, VectorizerModel, PredictionModel
from raic.foundry.inference import InferenceRun
# Look up existing data source record
data_source = Datasource.from_existing('My Existing Data Source')
# Look up models from model registry
universal_detector = UniversalDetector.from_existing('baseline', version='latest')
vectorizer_model = VectorizerModel.from_existing('baseline', version='latest')
prediction_model = PredictionModel.from_existing('My Prediction Model', version='latest')
# Start new inference run
run = InferenceRun.new(
    name='CM Inference Run',
    data_source=data_source,
    universal_detector=universal_detector,
    vectorizer_model=vectorizer_model,
    prediction_model=prediction_model
)
data_frame = run.wait_and_return_dataframe()
print(data_frame)
Example: Iterating results from a query as an alternative
from raic.foundry.inference import InferenceRun
...
for prediction in run.iterate_predictions():
    print(prediction)
- class inference.InferenceRun(record: dict, is_raic_vision: bool = False)[source]
Bases: object
- fetch_predictions_as_dataframe(include_embeddings: bool = True) → DataFrame[source]
Collect all of the prediction results from the inference run
- Parameters:
include_embeddings (bool, optional) – Include the embedding vector with each prediction. Defaults to True.
- Returns:
All of the prediction results as a pandas DataFrame, optionally including the embeddings for each
- Return type:
DataFrame
- classmethod from_existing(identifier: str)[source]
Look up an existing inference run by its UUID or its name. Note: if there are multiple runs with the same name, looking up by name will fail with an Exception.
- Parameters:
identifier (str) – Either the UUID of the inference run or its name
- Raises:
Exception – If multiple runs are returned with the same name
- Returns:
InferenceRun
- classmethod from_prompt(data_source: Datasource, name: str | None = None, universal_detector: UniversalDetector | None = None, vectorizer_model: VectorizerModel | None = None, prediction_model: PredictionModel | None = None, raic_vision_model: RaicVisionModel | None = None)[source]
- is_complete() → bool[source]
Check whether the run has completed yet
- Returns:
True if run status is Completed
- Return type:
bool
- iterate_predictions(include_embeddings: bool = True) → Iterator[dict][source]
Iterate through all inference run prediction results as they are queried from the API
- Parameters:
include_embeddings (bool, optional) – Include the embedding vector with each prediction. Defaults to True.
- Yields:
Iterator[dict] – All of the prediction results as an iterator, optionally including the embeddings for each
- classmethod new(name: str, data_source: Datasource, universal_detector: UniversalDetector | str | None = 'baseline', vectorizer_model: VectorizerModel | str | None = 'baseline', prediction_model: PredictionModel | str | None = None, raic_vision_model: RaicVisionModel | str | None = None)[source]
Create a new inference run
- Parameters:
name (str) – Name of new inference run
data_source (Datasource) – Data source object representing imagery already uploaded to a blob storage container
universal_detector (Optional[UniversalDetector | str], optional) – Model for object detection. Defaults to ‘baseline’.
vectorizer_model (Optional[VectorizerModel | str]) – Model for vectorizing detection crop images. Defaults to ‘baseline’.
prediction_model (Optional[PredictionModel | str], optional) – Model for classifying detections without needing deep training. Defaults to None.
raic_vision_model (Optional[RaicVisionModel | str], optional) – Model combining all three previous models into one. Defaults to None.
- Raises:
Exception – If no vectorizer model is specified
- Returns:
InferenceRun
- restart()[source]
In the event that an inference run gets stuck, it can be restarted from the beginning. Any frames already processed will be skipped.
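A minimal sketch of recovering a stalled run, assuming it was created earlier under the name shown:

```python
from raic.foundry.inference import InferenceRun

run = InferenceRun.from_existing('My New Inference Run')

# Re-queue the run from the beginning; frames already processed are skipped
run.restart()

data_frame = run.wait_and_return_dataframe()
print(data_frame)
```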
- stream_crop_images(destination_path: Path | str, max_workers: int | None = None) → Iterator[Path][source]
Download the crop images for inference run predictions.
Each one is named by its prediction identifier.
- Parameters:
destination_path (Path | str) – Local folder where prediction crops will be downloaded.
max_workers (Optional[int], optional) – Max number of worker threads to parallelize download. Defaults to None.
- Yields:
Iterator[Path] – Iterator of each crop image's local path as it is downloaded
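A sketch of processing crops as they stream in rather than after the full download finishes, assuming a completed run (names and paths are placeholders):

```python
from pathlib import Path

from raic.foundry.inference import InferenceRun

run = InferenceRun.from_existing('My New Inference Run')

# Crops are yielded as they are downloaded; each file is named by its
# prediction identifier
for crop_path in run.stream_crop_images(Path('crops'), max_workers=4):
    print(crop_path.name)
```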
- wait_and_return_dataframe(poll_interval: int = 10, include_embeddings: bool = True) → DataFrame[source]
Wait for the inference run to complete, then return the predictions as a data frame.
- Parameters:
poll_interval (int, optional) – Polling interval in seconds. Minimum value is 5 seconds. Defaults to 10 seconds.
include_embeddings (bool, optional) – Include the embedding vector with each prediction. Defaults to True.
- Returns:
All of the prediction results as a pandas DataFrame, optionally including the embeddings for each
- Return type:
DataFrame
models module
- class models.MLModel(id: str, version: int, record: dict)[source]
Bases:
ABC
- class models.PredictionModel(id: str, version: int, record: dict)[source]
Bases: MLModel
- classmethod from_existing(identifier: str, version: int | str | None = 'latest') → PredictionModel[source]
- classmethod from_prompt() → PredictionModel[source]
- class models.RaicVisionModel(id: str, version: int, record: dict)[source]
Bases: MLModel
- classmethod from_existing(identifier: str, version: int | str | None = 'latest') → RaicVisionModel[source]
- classmethod from_prompt() → RaicVisionModel[source]
- class models.UniversalDetector(id: str, version: int, record: dict, iou: float | None = None, confidence: float | None = None, max_detects: int | None = None, small_objects: bool | None = None)[source]
Bases: MLModel
- classmethod from_existing(identifier: str, version: int | str | None = 'latest', iou: float | None = None, confidence: int | None = None, max_detects: int | None = None) → UniversalDetector[source]
- classmethod from_prompt() → UniversalDetector[source]
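The detection thresholds exposed on UniversalDetector.from_existing can be tuned when the baseline defaults over- or under-detect; the threshold values below are illustrative assumptions, not recommended settings:

```python
from raic.foundry.datasources import Datasource
from raic.foundry.inference import InferenceRun
from raic.foundry.models import UniversalDetector

# Baseline detector with custom IOU/confidence thresholds and a detection cap
universal_detector = UniversalDetector.from_existing(
    'baseline',
    version='latest',
    iou=0.5,
    confidence=0.4,
    max_detects=100,
)

data_source = Datasource.from_existing('My Existing Data Source')
run = InferenceRun.new(
    name='Tuned Detector Run',
    data_source=data_source,
    universal_detector=universal_detector,
)
```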
- class models.VectorizerModel(id: str, version: int, record: dict)[source]
Bases: MLModel
- classmethod from_existing(identifier: str, version: int | str | None = 'latest') → VectorizerModel[source]
- classmethod from_prompt() → VectorizerModel[source]