Cpp-Processing
CelanturSDK::ModelCompiler Class Reference

This class is used to compile the model for the inference engine plugin. More...

#include <CelanturSDKInterface.h>

Public Member Functions

 ModelCompiler (std::filesystem::path license, ModelCompilerParams params)
 
 ModelCompiler (const ModelCompiler &other)
 
ModelCompiler & operator= (const ModelCompiler &other)
 
 ModelCompiler (ModelCompiler &&other) noexcept
 
ModelCompiler & operator= (ModelCompiler &&other) noexcept
 
 ~ModelCompiler ()
 
celantur::InferenceEnginePluginCompileSettings preload_model (std::filesystem::path model_path)
 Preload the model from the given path and return the settings needed to compile it. More...
 
void compile_model (celantur::InferenceEnginePluginCompileSettings settings, std::filesystem::path output_path)
 Compile the preloaded model with the given settings and save it to the given output path. More...
 

Detailed Description

This class is used to compile the model for the inference engine plugin.

If your inference engine plugin requires model compilation (for example, TensorRT or OpenVINO), you need to use this class to compile the model before using it in the Processor class.
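The typical workflow can be sketched as follows. This is a minimal, hedged example: the license and model paths are placeholders, and `ModelCompilerParams` is assumed to be default-constructible; consult the SDK headers for the actual fields it requires.

```cpp
#include <CelanturSDKInterface.h>
#include <filesystem>

int main()
{
    // Placeholder paths; substitute your own license and model files.
    std::filesystem::path license = "licence.lic";
    std::filesystem::path model   = "model.onnx";

    // ModelCompilerParams is assumed default-constructible here.
    CelanturSDK::ModelCompilerParams params;
    CelanturSDK::ModelCompiler compiler(license, params);

    // Preload to extract the model-specific compile settings,
    // then compile and write the encrypted artifact to disk.
    celantur::InferenceEnginePluginCompileSettings settings =
        compiler.preload_model(model);
    compiler.compile_model(settings, "model.trt.enc");
}
```

The compiled artifact written by compile_model() is what you later pass to the Processor class instead of the raw model file.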

Constructor & Destructor Documentation

◆ ModelCompiler() [1/3]

CelanturSDK::ModelCompiler::ModelCompiler ( std::filesystem::path  license,
ModelCompilerParams  params 
)

◆ ModelCompiler() [2/3]

CelanturSDK::ModelCompiler::ModelCompiler ( const ModelCompiler &  other)

◆ ModelCompiler() [3/3]

CelanturSDK::ModelCompiler::ModelCompiler ( CelanturSDK::ModelCompiler &&  other)
noexcept

◆ ~ModelCompiler()

CelanturSDK::ModelCompiler::~ModelCompiler ( )

Member Function Documentation

◆ compile_model()

void CelanturSDK::ModelCompiler::compile_model ( celantur::InferenceEnginePluginCompileSettings  settings,
std::filesystem::path  output_path 
)

Compile the preloaded model with the given settings and save it to the given output path.

Parameters
settings	The settings needed to compile the model; these settings are obtained from the preload_model() function.
output_path	The path where the compiled model is saved; the path should include the file name and extension.

The output path should be a valid path to save the compiled model. We recommend a file extension based on the inference engine plugin used:

  • TensorRT: .trt.enc
  • OpenVINO: .ovino.enc
  • ONNX: .onnx.enc

These extensions are not required; you can use any extension you want, but our tests assume them.
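The extension convention above can be captured in a small helper. This is an illustrative sketch: the lowercase plugin-name strings used as keys are hypothetical, not SDK identifiers; only the extensions come from this documentation.

```cpp
#include <map>
#include <stdexcept>
#include <string>

// Map an inference engine plugin name to the recommended output extension.
// The key strings are illustrative; only the extensions are documented.
std::string recommended_extension(const std::string &plugin)
{
    static const std::map<std::string, std::string> ext{
        {"tensorrt", ".trt.enc"},
        {"openvino", ".ovino.enc"},
        {"onnx",     ".onnx.enc"},
    };
    auto it = ext.find(plugin);
    if (it == ext.end())
        throw std::invalid_argument("unknown plugin: " + plugin);
    return it->second;
}
```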

◆ operator=() [1/2]

CelanturSDK::ModelCompiler & CelanturSDK::ModelCompiler::operator= ( const ModelCompiler &  other)

◆ operator=() [2/2]

CelanturSDK::ModelCompiler & CelanturSDK::ModelCompiler::operator= ( CelanturSDK::ModelCompiler &&  other)
noexcept

◆ preload_model()

celantur::InferenceEnginePluginCompileSettings CelanturSDK::ModelCompiler::preload_model ( std::filesystem::path  model_path)

Preload the model from the given path and return the settings needed to compile it.

Preloading is needed to extract the model-specific settings required to compile the model for the given inference engine plugin; it is impossible to determine these settings purely from the inference engine. For more about inference engines and their settings, please read the documentation: Inference Engines

Parameters
model_path	Path to the model file
Returns
celantur::InferenceEnginePluginCompileSettings The settings that are needed to compile the model