## Overview

### Integration details
| Class | Package | Serializable | JS support | Downloads | Version |
|---|---|---|---|---|---|
| ChatOCIGenAI | langchain-oci | beta | ❌ | | |
### Model features
| Tool calling | Structured output | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
|---|---|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ✅ (Gemini) | ✅ (Gemini) | ✅ | ✅ | ✅ | ❌ |
## Setup
### Installation
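The integration lives in the `langchain-oci` package named in the table above:

```shell
pip install -U langchain-oci
```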
### Credentials
Set up authentication with the OCI CLI (creates `~/.oci/config`):
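With the OCI CLI installed, its interactive setup walks you through creating the config file and API signing keys:

```shell
oci setup config
```

Other authentication methods (for example, instance principals when running inside OCI) may also be supported; check the API reference for details.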
## Instantiation
- `model_id` - The model to use (see available models)
- `service_endpoint` - Regional endpoint (`us-chicago-1`, `eu-frankfurt-1`, etc.)
- `compartment_id` - Your OCI compartment OCID
- `model_kwargs` - Model settings like temperature, max_tokens
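A minimal sketch; the import path is assumed from the `langchain-oci` package, and the model ID, endpoint, and compartment OCID below are placeholders:

```python
from langchain_oci import ChatOCIGenAI

llm = ChatOCIGenAI(
    model_id="meta.llama-3.3-70b-instruct",  # placeholder: any model available in your region
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",  # placeholder
    compartment_id="ocid1.compartment.oc1..example",  # placeholder: your compartment OCID
    model_kwargs={"temperature": 0.2, "max_tokens": 512},
)
```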
## Invocation
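A basic call, assuming a `ChatOCIGenAI` instance named `llm` as created under Instantiation:

```python
messages = [
    ("system", "You are a helpful assistant that translates English to French."),
    ("human", "I love programming."),
]
response = llm.invoke(messages)
print(response.content)
```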
## Streaming
Get responses as they’re generated:
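A sketch of token streaming, assuming an `llm` instance as created under Instantiation:

```python
# Chunks arrive as they are generated instead of after the full response
for chunk in llm.stream("Write a haiku about the ocean."):
    print(chunk.content, end="", flush=True)
```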
## Async

Process multiple requests concurrently for better throughput:
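A sketch using LangChain's standard async API, assuming an `llm` instance as created under Instantiation:

```python
import asyncio

async def main():
    # Issue both requests concurrently instead of one after the other
    results = await asyncio.gather(
        llm.ainvoke("Summarize retrieval-augmented generation in one sentence."),
        llm.ainvoke("Summarize tool calling in one sentence."),
    )
    for r in results:
        print(r.content)

asyncio.run(main())
```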
## Tool Calling

Give models access to APIs, databases, and custom functions:
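A sketch using LangChain's standard `bind_tools` interface, assuming an `llm` instance as created under Instantiation; `get_weather` is a hypothetical stand-in for a real API:

```python
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"  # hypothetical stand-in for a real weather API

llm_with_tools = llm.bind_tools([get_weather])
msg = llm_with_tools.invoke("What's the weather in Chicago?")
print(msg.tool_calls)  # the model's requested tool invocations, if any
```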
## Structured Output

Parse unstructured text into typed data structures for processing:
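A sketch using `with_structured_output` with a Pydantic schema, assuming an `llm` instance as created under Instantiation:

```python
from pydantic import BaseModel, Field

class Person(BaseModel):
    """Details extracted from free text."""
    name: str = Field(description="The person's full name")
    age: int = Field(description="Age in years")

structured_llm = llm.with_structured_output(Person)
person = structured_llm.invoke("Alice Smith is 34 years old.")
print(person.name, person.age)
```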
## Vision & Multimodal

Process images for data extraction, analysis, and automation:
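A sketch using LangChain's standard image content blocks, assuming an `llm` instance backed by a vision-capable model; `chart.png` is a hypothetical local file:

```python
import base64

with open("chart.png", "rb") as f:  # hypothetical local image
    image_b64 = base64.b64encode(f.read()).decode()

message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
    ],
}
response = llm.invoke([message])
print(response.content)
```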
## Gemini Multimodal (PDF, Video, Audio)

Process documents, videos, and audio with Gemini models:
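A sketch using LangChain's base64 file content blocks, assuming an `llm` instance configured with a Gemini model; `report.pdf` is a hypothetical local file, and the exact accepted block shape should be confirmed against the API reference:

```python
import base64

with open("report.pdf", "rb") as f:  # hypothetical local document
    pdf_b64 = base64.b64encode(f.read()).decode()

message = {
    "role": "user",
    "content": [
        {"type": "file", "source_type": "base64", "mime_type": "application/pdf", "data": pdf_b64},
        {"type": "text", "text": "Summarize this document."},
    ],
}
response = llm.invoke([message])
print(response.content)
```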
## Configuration

Control model behavior with `model_kwargs`:
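A sketch of common `model_kwargs` settings; the import path is assumed from the `langchain-oci` package, and the instantiation values are placeholders:

```python
from langchain_oci import ChatOCIGenAI

llm = ChatOCIGenAI(
    model_id="meta.llama-3.3-70b-instruct",  # placeholder model ID
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",  # placeholder
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    model_kwargs={
        "temperature": 0.7,  # sampling randomness
        "max_tokens": 1024,  # cap on generated tokens
        "top_p": 0.9,        # nucleus sampling threshold
    },
)
```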
## Available Models
| Provider | Example Models | Key Features |
|---|---|---|
| Meta | Llama 3.2/3.3/4 (Scout, Maverick) | Vision, parallel tools |
| Google | Gemini 2.0/2.5 Flash, Pro | PDF, video, audio |
| xAI | Grok 3, Grok 4 | Vision, reasoning |
| Cohere | Command R+, Command A | RAG, vision |
## API Reference
For detailed documentation of all `ChatOCIGenAI` features and configurations, head to the API reference.

