LLM Core(s)

For the implementation of LLM core(s), we design LLMCore as the base class, shown below, which defines the core functionalities that every LLM backend must provide.

from abc import ABC, abstractmethod

class LLMCore(ABC):
    def __init__(self,
                 llm_name: str,
                 max_gpu_memory: dict = None,
                 eval_device: str = None,
                 max_new_tokens: int = 256,
                 log_mode: str = "console",
                 use_context_manager: bool = False):
        """ Initialize LLMCore with model configurations.
        """
        pass

    @abstractmethod
    def load_llm_and_tokenizer(self) -> None:
        """ Load the LLM model and its tokenizer.
        """
        pass

    def tool_calling_input_format(self,
                                  prompt: list,
                                  tools: list) -> list:
        """ Format prompts to include tool information.
        """
        pass

    def parse_tool_calls(self, tool_call_str: str):
        """ Parse and add tool call identifiers for models
        without native tool support.
        """
        pass

    @abstractmethod
    def address_syscall(self,
                        llm_syscall,
                        temperature: float = 0.0):
        """ Process the syscall sent to the LLM.
        """
        pass

    @abstractmethod
    def llm_generate(self,
                     prompt,
                     temperature: float = 0.0):
        """ Generate a response based on the provided prompt.
        """
        pass
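To make the two tool-handling hooks concrete, below is a minimal sketch of what tool_calling_input_format and parse_tool_calls might do for a model without native tool support. The OpenAI-style message dicts, the JSON tool schema, and the "call_0" identifier format are assumptions for illustration, not the documented AIOS formats.

import json
import re

def tool_calling_input_format(prompt: list, tools: list) -> list:
    # Prepend a system message describing the available tools so that a
    # model without native tool support can still emit tool calls as JSON.
    tool_instruction = (
        "You can call one of the following tools by replying with a JSON "
        'object of the form {"name": ..., "parameters": ...}:\n'
        + json.dumps(tools, indent=2)
    )
    return [{"role": "system", "content": tool_instruction}] + prompt

def parse_tool_calls(tool_call_str: str):
    # Extract the first JSON object from the raw model output and tag it
    # with a call identifier, mirroring native tool-call responses.
    match = re.search(r"\{.*\}", tool_call_str, re.DOTALL)
    if match is None:
        return None
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    return [{"id": "call_0",
             "name": call.get("name"),
             "parameters": call.get("parameters", {})}]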

Concrete LLM instances are implemented by inheriting from the LLMCore class and overriding its abstract methods.
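As an illustration, the following is a minimal sketch of a concrete core backed by Hugging Face transformers. The subclass name HfLocalCore is hypothetical, and the assumption that llm_syscall carries the prompt in a plain-text query attribute is ours; consult the actual AIOS syscall definition for the real interface.

from transformers import AutoModelForCausalLM, AutoTokenizer

class HfLocalCore(LLMCore):
    # Hypothetical subclass; LLMCore is the base class defined above.
    def __init__(self, llm_name: str, max_new_tokens: int = 256):
        super().__init__(llm_name=llm_name, max_new_tokens=max_new_tokens)
        # The skeleton __init__ above is a stub, so this sketch stores
        # the fields it needs itself.
        self.llm_name = llm_name
        self.max_new_tokens = max_new_tokens
        self.load_llm_and_tokenizer()

    def load_llm_and_tokenizer(self) -> None:
        # Load the model and tokenizer from the Hugging Face hub.
        self.tokenizer = AutoTokenizer.from_pretrained(self.llm_name)
        self.model = AutoModelForCausalLM.from_pretrained(
            self.llm_name, device_map="auto"
        )

    def llm_generate(self, prompt, temperature=0.0):
        # Tokenize, generate, and decode only the newly produced tokens.
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
        gen_kwargs = {"max_new_tokens": self.max_new_tokens}
        if temperature > 0:
            gen_kwargs.update(do_sample=True, temperature=temperature)
        outputs = self.model.generate(**inputs, **gen_kwargs)
        new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
        return self.tokenizer.decode(new_tokens, skip_special_tokens=True)

    def address_syscall(self, llm_syscall, temperature=0.0):
        # How the prompt is carried inside llm_syscall is an assumption
        # here; we treat it as a plain-text `query` attribute.
        return self.llm_generate(llm_syscall.query, temperature=temperature)

Such a core could then be instantiated with any local checkpoint name, e.g. HfLocalCore("meta-llama/Llama-3.2-1B-Instruct") (an example model, not a requirement), and used to serve generation syscalls.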
