Cpp-Processing

This is the main class of the SDK, used to process images and obtain the results.

#include <CelanturSDKInterface.h>
Public Member Functions

  Processor(celantur::ProcessorParams params, std::filesystem::path license_path)
      Construct the processor object with the given processor parameters and a path to the license file.

  Processor(const Processor &other)

  Processor &operator=(const Processor &other)

  Processor(Processor &&other)

  Processor &operator=(Processor &&other)

  celantur::InferenceEnginePluginSettings get_inference_settings(std::filesystem::path model_path)
      Get the inference engine plugin settings for the given model.

  void load_inference_model(celantur::InferenceEnginePluginSettings settings, AdditionalProcessorParams params = AdditionalProcessorParams())
      Load the preloaded inference model with the given settings.

  celantur::InferenceEnginePluginSettings get_inference_settings()
      Get the inference engine plugin settings.

  void load_inference_model(std::filesystem::path model_path, celantur::InferenceEnginePluginSettings settings = celantur::InferenceEnginePluginSettings{})
      Load the inference model from the given path.

  ~Processor()

  void process(cv::Mat mat)
      Enqueue the given image (mat) for processing. This function is non-blocking and returns immediately.

  cv::Mat get_result()
      Get the anonymised image that was processed by the process() method.

  std::vector<celantur::CelanturDetection> get_detections()
      Get the detections found in the last processed image. Call this exactly once per image posted via process(); otherwise the detections accumulate in the queue and consume memory. You can safely call it and discard the results.
This is the main class of the SDK, used to process images and obtain the results.

Currently we do not extensively test what happens when multiple instances of this class are created. It is recommended to create only one instance and use it throughout the application. The class is already optimised to process images in parallel as fast as possible, so there is no performance gain from creating multiple instances.

Copying this class does not produce a proper deep copy; instead, you get a reference to the same underlying processing object (a shallow copy). The only way to create a new instance is to use the normal constructor.
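A minimal sketch of the copy semantics described above (assuming the Celantur SDK headers and already-constructed `params` and `license_path` values; this is illustrative, not a compilable program):

```cpp
CelanturSDK::Processor processor(params, license_path);

// Shallow copy: `copy` shares the same underlying processing object.
CelanturSDK::Processor copy(processor);

// For an independent processor, construct a new one instead of copying.
CelanturSDK::Processor independent(params, license_path);
```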
CelanturSDK::Processor::Processor(celantur::ProcessorParams params, std::filesystem::path license_path)
Construct the processor object with the given processor parameters and a path to the license file.

One of the fields of ProcessorParams, the inference engine plugin, is mandatory; the rest are optional. The inference_plugin field is the path to the inference engine plugin library. Read more about inference engine plugins in the documentation: Inference Engines. You need to provide a valid path to your selected inference engine plugin.

The license_path is the path to the license file provided by Celantur. The license file is a text file that contains the license data.
Parameters
  params        The parameters for the processor.
  license_path  The path to the license file.
References celantur::ProcessorParams::inference_plugin.
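As a sketch, constructing a processor might look like the following. The file paths are hypothetical placeholders; only ProcessorParams::inference_plugin and the constructor signature come from this reference:

```cpp
#include <CelanturSDKInterface.h>

int main()
{
    celantur::ProcessorParams params;
    // Mandatory: path to the inference engine plugin library (hypothetical path).
    params.inference_plugin = "/opt/celantur/plugins/inference_plugin.so";

    // License file provided by Celantur (hypothetical path).
    CelanturSDK::Processor processor(params, "/opt/celantur/license.txt");
}
```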
CelanturSDK::Processor::Processor(const Processor &other)

CelanturSDK::Processor::Processor(CelanturSDK::Processor &&other)

CelanturSDK::Processor::~Processor()

std::vector<celantur::CelanturDetection> CelanturSDK::Processor::get_detections()
Get the detections that were found in the last image that was processed. Note that it is necessary to call this function exactly once per image posted via the process() function; otherwise, the detections will stack up in the queue and take up memory. You can safely call it and discard the results.
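A sketch of the one-call-per-image contract (assumes a constructed `processor` with a model already loaded, and an `image` of type cv::Mat; whether get_result() blocks until the result is ready is an assumption, not stated in this reference):

```cpp
processor.process(image);                       // enqueue one image
cv::Mat anonymised = processor.get_result();    // retrieve the anonymised image

// Exactly one get_detections() call per processed image,
// even if you do not need the detections:
std::vector<celantur::CelanturDetection> detections = processor.get_detections();
(void)detections;  // safe to discard
```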
celantur::InferenceEnginePluginSettings CelanturSDK::Processor::get_inference_settings()

Get the inference engine plugin settings.
This function returns the inference engine plugin settings that are needed to load the model with the load_inference_model(std::filesystem::path model_path, celantur::InferenceEnginePluginSettings settings) function.
celantur::InferenceEnginePluginSettings CelanturSDK::Processor::get_inference_settings(std::filesystem::path model_path)

Get the inference engine plugin settings.
This function returns the inference engine plugin settings that are needed to load the model with the load_inference_model(celantur::InferenceEnginePluginSettings settings, AdditionalProcessorParams params) function. It takes the model as input because some settings depend not only on the inference engine but can also be dictated by the model, such as input size, output size, etc.
Parameters
  model_path  The path to the model file.
cv::Mat CelanturSDK::Processor::get_result()

Get the anonymised image that was processed by the process() method.
void CelanturSDK::Processor::load_inference_model(celantur::InferenceEnginePluginSettings settings, AdditionalProcessorParams params = AdditionalProcessorParams())
Load the preloaded inference model with the given settings.

This function loads the model with the given settings and prepares it for processing. After calling this function, the processor is ready to process images.

Parameters
  settings  The settings for the inference engine plugin.
  params    Additional parameters for the processor; these parameters are currently fully optional and can be safely omitted.
References CelanturSDK::AdditionalProcessorParams::context_height, and CelanturSDK::AdditionalProcessorParams::context_width.
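The two-step flow described above can be sketched as follows (assumes a constructed `processor`; the model path is a hypothetical placeholder):

```cpp
// Query settings that depend on both the plugin and this particular model
// (e.g. input/output size), then load the model with them.
celantur::InferenceEnginePluginSettings settings =
    processor.get_inference_settings("/opt/celantur/models/model.bin");  // hypothetical path

processor.load_inference_model(settings);  // AdditionalProcessorParams omitted (optional)
```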
void CelanturSDK::Processor::load_inference_model(std::filesystem::path model_path, celantur::InferenceEnginePluginSettings settings = celantur::InferenceEnginePluginSettings{})
Load the inference model from the given path.

This function loads the model from the given path and prepares it for processing.

Parameters
  model_path  The path to the model file.
  settings    The settings for the inference engine plugin.
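For the simplest case, this overload can be called with just a path, relying on default-constructed settings (hypothetical path; assumes a constructed `processor`):

```cpp
// Load directly from a path; `settings` defaults to
// celantur::InferenceEnginePluginSettings{}.
processor.load_inference_model("/opt/celantur/models/model.bin");
```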
CelanturSDK::Processor &CelanturSDK::Processor::operator=(const Processor &other)

CelanturSDK::Processor &CelanturSDK::Processor::operator=(CelanturSDK::Processor &&other)

void CelanturSDK::Processor::process(cv::Mat mat)
Enqueue the given image (mat) for processing. This function is non-blocking and returns immediately.

The result can be obtained by calling the get_result() and get_detections() functions. You can post multiple images to the processor simultaneously to achieve the best performance: internally, the processor maintains a queue of images that are processed in parallel across multiple threads. By default, the queue size is unlimited, but you can limit it by setting the queue_size parameter when creating the processor.
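The pipelined usage described above can be sketched as follows (assumes the SDK headers, OpenCV image I/O, and a `processor` with a loaded model; file names are hypothetical, and the assumption that results are returned in submission order is not stated in this reference):

```cpp
#include <opencv2/imgcodecs.hpp>

std::vector<std::string> files = {"a.jpg", "b.jpg", "c.jpg"};  // hypothetical inputs

// Non-blocking: enqueue all images first so the worker threads stay busy.
for (const auto &f : files)
    processor.process(cv::imread(f));

// Drain the results; call get_detections() exactly once per posted image.
for (const auto &f : files)
{
    cv::Mat anonymised = processor.get_result();
    std::vector<celantur::CelanturDetection> detections = processor.get_detections();
    cv::imwrite("anon_" + f, anonymised);
}
```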