Cpp-Processing
CelanturSDK::Processor Class Reference

The main class of the SDK, used to process images and obtain the results. More...

#include <CelanturSDKInterface.h>

Public Member Functions

 Processor (celantur::ProcessorParams params, std::filesystem::path license_path)
 Construct the processor object with the given processor parameters and a path to the license file. More...
 
 Processor (const Processor &other)
 
Processor & operator= (const Processor &other)
 
 Processor (Processor &&other)
 
Processor & operator= (Processor &&other)
 
celantur::InferenceEnginePluginSettings get_inference_settings (std::filesystem::path model_path)
 Get the inference engine plugin settings. More...
 
void load_inference_model (celantur::InferenceEnginePluginSettings settings, AdditionalProcessorParams params=AdditionalProcessorParams())
 Load the preloaded inference model with the given settings. More...
 
celantur::InferenceEnginePluginSettings get_inference_settings ()
 Get the inference engine plugin settings. More...
 
void load_inference_model (std::filesystem::path model_path, celantur::InferenceEnginePluginSettings settings=celantur::InferenceEnginePluginSettings{})
 Load the inference model from the given path. More...
 
 ~Processor ()
 
void process (cv::Mat mat)
 Enqueue the given image (mat) for processing. This function is non-blocking and returns immediately. More...
 
cv::Mat get_result ()
 Get the anonymised image that was processed by the process() method. More...
 
std::vector< celantur::CelanturDetection > get_detections ()
 Get the detections found in the last image that was processed. Note that it is necessary to call this function exactly once per image posted via the process() function; otherwise, the detections will stack up in the queue and take up memory. You can safely call this function and discard the results. More...
 

Detailed Description

The main class of the SDK, used to process images and obtain the results.

Creating multiple instances of this class is currently not extensively tested. It is recommended to create only one instance and use it throughout the application. The class is already optimised to perform the job as fast as possible in parallel, so there is no performance gain from creating multiple instances.

Copying this class does not produce a proper deep copy: the copy is a reference to the same underlying processing object (a shallow copy). The only way to create a new instance is to use the normal constructor.
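The single-instance lifecycle described above can be sketched as follows. The plugin, licence, model, and image paths are placeholders, and it is assumed here that get_result() blocks until a result is available:

```cpp
#include <filesystem>
#include <opencv2/opencv.hpp>
#include <CelanturSDKInterface.h>

int main() {
    celantur::ProcessorParams params;
    params.inference_plugin = "/path/to/inference_plugin.so";  // mandatory field

    // One Processor instance for the whole application.
    CelanturSDK::Processor processor(params, "/path/to/licence.txt");

    // Derive model-specific settings, then load the model.
    auto settings = processor.get_inference_settings("/path/to/model.onnx");
    processor.load_inference_model(settings);

    cv::Mat image = cv::imread("input.jpg");
    processor.process(image);                      // non-blocking enqueue
    cv::Mat anonymised = processor.get_result();   // exactly once per process()
    auto detections = processor.get_detections();  // exactly once per process()
    cv::imwrite("output.jpg", anonymised);
}
```

Calling get_result() and get_detections() exactly once per process() call keeps the internal queues from growing.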

Constructor & Destructor Documentation

◆ Processor() [1/3]

CelanturSDK::Processor::Processor ( celantur::ProcessorParams  params,
std::filesystem::path  license_path 
)

Construct the processor object with the given processor parameters and a path to the license file.

One field of ProcessorParams, the inference engine plugin, is mandatory; the rest are optional. The inference engine plugin field is a path to the inference engine plugin library, and you need to provide a valid path to your selected plugin. Read more about inference engine plugins in the documentation: Inference Engines.

license_path is the path to the license file provided by Celantur. The license file is a text file that contains the license data.

Parameters
params	The parameters for the processor.
license_path	The path to the license file.

References celantur::ProcessorParams::inference_plugin.
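A minimal construction sketch. Only inference_plugin is documented as mandatory; the plugin and licence file names below are placeholders, and it is assumed that the field accepts a path string:

```cpp
#include <CelanturSDKInterface.h>

int main() {
    celantur::ProcessorParams params;
    params.inference_plugin = "/opt/celantur/plugins/libinference_plugin.so";  // mandatory
    // The remaining ProcessorParams fields are optional.

    CelanturSDK::Processor processor(params, "/opt/celantur/licence.txt");
}
```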

◆ Processor() [2/3]

CelanturSDK::Processor::Processor ( const Processor &  other)

◆ Processor() [3/3]

CelanturSDK::Processor::Processor ( CelanturSDK::Processor &&  other)

◆ ~Processor()

CelanturSDK::Processor::~Processor ( )

Member Function Documentation

◆ get_detections()

std::vector< celantur::CelanturDetection > CelanturSDK::Processor::get_detections ( )

Get the detections found in the last image that was processed. Note that it is necessary to call this function exactly once per image posted via the process() function; otherwise, the detections will stack up in the queue and take up memory. You can safely call this function and discard the results.

◆ get_inference_settings() [1/2]

celantur::InferenceEnginePluginSettings CelanturSDK::Processor::get_inference_settings ( )

Get the inference engine plugin settings.

This function returns the inference engine plugin settings that are needed to load the model with the load_inference_model(std::filesystem::path model_path, celantur::InferenceEnginePluginSettings settings) function.

Deprecated:
This method is deprecated because it does not allow for the model-specific settings that some inference engine plugins, such as OpenVINO and TensorRT, require. We suggest using the new method get_inference_settings(std::filesystem::path model_path). This method still works for backward compatibility, but it will be removed in future major versions.

◆ get_inference_settings() [2/2]

celantur::InferenceEnginePluginSettings CelanturSDK::Processor::get_inference_settings ( std::filesystem::path  model_path)

Get the inference engine plugin settings.

This function returns the inference engine plugin settings that are needed to load the model with the load_inference_model(celantur::InferenceEnginePluginSettings settings, AdditionalProcessorParams params) function. It takes the model as an input because some settings depend not only on the inference engine but can also be dictated by the model, such as input size, output size, etc.

Parameters
model_path	The path to the model file.
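This model-aware overload pairs with the non-deprecated load_inference_model(settings) overload. A short sketch, where the model filename and format are placeholders:

```cpp
// Derive settings from a specific model, then load it with those settings.
celantur::InferenceEnginePluginSettings settings =
    processor.get_inference_settings("/opt/celantur/models/model.onnx");
processor.load_inference_model(settings);
```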

◆ get_result()

cv::Mat CelanturSDK::Processor::get_result ( )

Get the anonymised image that was processed by the process() method.

Note that it is necessary to call this function exactly once per image posted via the process() function; otherwise, the images will stack up in the queue and take up memory.

Returns
cv::Mat The processed image with anonymisation applied.

◆ load_inference_model() [1/2]

void CelanturSDK::Processor::load_inference_model ( celantur::InferenceEnginePluginSettings  settings,
AdditionalProcessorParams  params = AdditionalProcessorParams() 
)

Load the preloaded inference model with the given settings.

This function loads the model with the given settings and prepares it for processing. After calling this function, the processor is ready to process images.

Parameters
settings	The settings for the inference engine plugin.
params	The additional parameters for the processor; these parameters are currently fully optional and can be safely omitted.
See also
load_inference_model(std::filesystem::path model_path, celantur::InferenceEnginePluginSettings settings = celantur::InferenceEnginePluginSettings{});

References CelanturSDK::AdditionalProcessorParams::context_height, and CelanturSDK::AdditionalProcessorParams::context_width.
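As the References line notes, this overload reads AdditionalProcessorParams::context_width and AdditionalProcessorParams::context_height. A sketch of passing them explicitly; the values are illustrative assumptions, and the fields' exact semantics are defined by the SDK:

```cpp
CelanturSDK::AdditionalProcessorParams extra;
extra.context_width = 1920;   // illustrative value
extra.context_height = 1080;  // illustrative value
processor.load_inference_model(settings, extra);
```

Since both fields are optional, omitting the second argument entirely is equally valid.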

◆ load_inference_model() [2/2]

void CelanturSDK::Processor::load_inference_model ( std::filesystem::path  model_path,
celantur::InferenceEnginePluginSettings  settings = celantur::InferenceEnginePluginSettings{} 
)

Load the inference model from the given path.

This function loads the model from the given path and prepares it for processing.

Deprecated:
This method is deprecated because it does not allow for the model-specific settings that some inference engine plugins, such as OpenVINO and TensorRT, require. We suggest using the new method load_inference_model(celantur::InferenceEnginePluginSettings settings, AdditionalProcessorParams params). This method still works for backward compatibility, but it will be removed in future major versions.
See also
load_inference_model(celantur::InferenceEnginePluginSettings settings, AdditionalProcessorParams params = AdditionalProcessorParams());
Parameters
model_path	The path to the model file.
settings	The settings for the inference engine plugin.

◆ operator=() [1/2]

CelanturSDK::Processor & CelanturSDK::Processor::operator= ( const Processor &  other)

◆ operator=() [2/2]

CelanturSDK::Processor & CelanturSDK::Processor::operator= ( CelanturSDK::Processor &&  other)

◆ process()

void CelanturSDK::Processor::process ( cv::Mat  mat)

Enqueue the given image (mat) for processing. This function is non-blocking and returns immediately.

The result can be obtained by calling the get_result() and get_detections() functions. You can post multiple images to the processor at once to achieve the best performance: internally, the processor maintains a queue of images that are processed in parallel by multiple threads. By default, the queue size is unlimited, but you can limit it by setting the queue_size parameter in the parameters when creating the processor.
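A pipelined-processing sketch: enqueue several frames, then drain the results. The load_frames() helper is hypothetical, and it is assumed that get_result() blocks until a result is ready and that results come back in submission order:

```cpp
std::vector<cv::Mat> frames = load_frames();  // hypothetical helper

for (const cv::Mat &frame : frames)
    processor.process(frame);                 // non-blocking enqueue

for (std::size_t i = 0; i < frames.size(); ++i) {
    cv::Mat anonymised = processor.get_result();
    auto detections = processor.get_detections();  // drain even if unused
    // ... use anonymised / detections ...
}
```

Draining both get_result() and get_detections() once per process() call prevents the internal queues from growing without bound.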