llama_api_client

Llama API Client - Integrates with Ollama for local AI content generation

Functions

generate_sample_ai_content()

Generate sample AI content for testing purposes

Classes

AIContentRequest(content_type, ...[, ...])

Request structure for AI content generation

AIContentResponse(content_type, ...[, metadata])

Response structure for AI-generated content

BaseAIClient([base_dir])

Abstract base class for AI content generation clients

ContentType(value)

Types of AI-generated content

LlamaAPIClient([base_dir])

Client for generating content using Ollama (local Llama models)

Exceptions

AIContentError

Exception raised when AI content generation fails

AIProviderError

Exception raised when the AI provider is unavailable or misconfigured
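
Example (a minimal sketch of defensive use of the module's exception hierarchy; which specific calls raise which exception is not spelled out in this summary):

    from llama_api_client import (
        AIContentError,
        AIProviderError,
        LlamaAPIClient,
    )

    client = LlamaAPIClient(base_dir=".")
    try:
        if not client.is_available():
            raise AIProviderError("Ollama is not running")
        print(f"Using model: {client.get_model_name()}")
    except AIProviderError as exc:
        print(f"Ollama is missing or misconfigured: {exc}")
    except AIContentError as exc:
        print(f"Content generation failed: {exc}")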

class llama_api_client.LlamaAPIClient(base_dir='.')[source]

Client for generating content using Ollama (local Llama models)

__init__(base_dir='.')[source]

is_available()[source]

Check if Ollama is available and configured

Return type:

bool
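
Example (sketch; the check presumably probes the local Ollama installation, so the result depends on the host environment):

    from llama_api_client import LlamaAPIClient

    client = LlamaAPIClient()
    if not client.is_available():
        print("Ollama not reachable; skipping AI content generation")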

get_model_name()[source]

Get the specific Llama model name

Return type:

str

generate_content(request)[source]

Generate AI content using Ollama

Return type:

AIContentResponse
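
Example (a sketch only: the signatures above elide most AIContentRequest fields, so only the documented content_type is filled in, and the ContentType value shown is a hypothetical example):

    from llama_api_client import AIContentRequest, ContentType, LlamaAPIClient

    client = LlamaAPIClient()
    # "cover_letter" is a hypothetical enum value; any fields elided in the
    # AIContentRequest signature above are omitted here.
    request = AIContentRequest(content_type=ContentType("cover_letter"))
    response = client.generate_content(request)
    # content_type and metadata are the documented AIContentResponse fields.
    print(response.content_type, response.metadata)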

generate_all_cover_letter_content(job_description, profile_content, company_name, position_title)[source]

Generate all cover letter content variables at once

Return type:

Dict[str, str]
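
Example (sketch; the company and position strings are placeholder values):

    from llama_api_client import LlamaAPIClient

    client = LlamaAPIClient()
    job_text = "..."      # full job-description text
    profile_text = "..."  # applicant profile text
    variables = client.generate_all_cover_letter_content(
        job_description=job_text,
        profile_content=profile_text,
        company_name="Acme Corp",
        position_title="Backend Engineer",
    )
    for name, text in variables.items():
        print(f"{name}: {text[:60]}")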

extract_company_and_position(job_description)[source]

Extract company name and position title from job description

Return type:

Dict[str, str]
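
Example (sketch; the dictionary keys are not listed in this summary, so "company_name" and "position_title" are assumptions mirroring the method's description):

    from llama_api_client import LlamaAPIClient

    client = LlamaAPIClient()
    info = client.extract_company_and_position("...")  # job-description text
    print(info.get("company_name"), info.get("position_title"))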

get_available_models()[source]

Get list of available Ollama models

Return type:

list
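
Example (sketch; the return type is documented only as list, so the items are assumed here to be model-name strings):

    from llama_api_client import LlamaAPIClient

    client = LlamaAPIClient()
    for model in client.get_available_models():
        print(model)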

install_model(model_name=None)[source]

Install/pull a model in Ollama

Return type:

bool
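
Example (sketch; "llama3" is an example Ollama model tag, and with model_name=None the client presumably pulls its configured default):

    from llama_api_client import LlamaAPIClient

    client = LlamaAPIClient()
    if client.install_model("llama3"):
        print("Model installed")
    else:
        print("Install failed or Ollama unavailable")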