Documentation Index¶
Fetch the complete documentation index at: https://makakoo-traylinx-35.mintlify.app/llms.txt Use this file to discover all available pages before exploring further.
LLM Chat¶
Chat directly with AI language models through TrayLinx's multi-model interface.
LLM Chat is a direct chat interface that connects to your project's active AI models through the TrayLinx LLM proxy. Unlike the Personal Assistant — which is a context-aware AI helper — LLM Chat gives you raw, one-shot access to any model available in your project, making it useful for prompt iteration, model comparison, and debugging.
Multi-model chat overview¶
LLM Chat fetches the list of active chat models registered in your project from the organization's model registry. Every model the LLM proxy has marked as active and mode=chat appears in the model selector.
Responses are returned as plain text and rendered with whitespace preserved, so you can inspect raw model output without Markdown processing.
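The selector's behavior can be sketched as a simple filter over registry entries. This is a minimal illustration, not the actual TrayLinx code; the exact field names on a registry entry are assumptions beyond the mode=chat and active flags the docs describe.

```typescript
// Shape of an entry in the model registry. Field names other than
// `mode` and `active` are illustrative assumptions.
interface RegistryModel {
  id: string;
  mode: string;   // e.g. "chat", "embedding"
  active: boolean;
}

// Reproduce the selector's filter: only active models with mode=chat
// are offered in the LLM Chat model dropdown.
function chatModels(registry: RegistryModel[]): string[] {
  return registry
    .filter((m) => m.active && m.mode === "chat")
    .map((m) => m.id);
}

// Example: only the active chat model survives the filter.
const visible = chatModels([
  { id: "gpt-4o", mode: "chat", active: true },
  { id: "text-embedding-3-small", mode: "embedding", active: true },
  { id: "claude-3-opus", mode: "chat", active: false },
]);
// visible === ["gpt-4o"]
```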
Selecting a model¶
Use the model selector to choose which active chat model receives your prompt. You can switch models at any time and re-submit the same prompt to compare outputs across models.
Direct API integration with model providers¶
All model calls in LLM Chat route through the TrayLinx LLM proxy (REACT_APP_OPENAI_API_BASE). The proxy:
- Authenticates requests using your project's secret key as the OpenAI-compatible API key
- Forwards the openai-organization header to identify the tenant
- Returns responses in the OpenAI chat completion format
This means any model the proxy exposes — including third-party models from Anthropic, Google, or others — appears in LLM Chat without additional configuration, as long as it is registered and active in your organization.
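The request the proxy expects can be sketched as follows. This is an assumption-laden illustration of an OpenAI-compatible chat completion call: the base URL, secret key, and organization ID are placeholders, and the helper name buildChatRequest is invented for this example.

```typescript
// Sketch of the request LLM Chat sends through the proxy. The base URL
// comes from REACT_APP_OPENAI_API_BASE; the key and org ID here are
// placeholders, not real credentials.
interface ChatRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildChatRequest(
  base: string,      // REACT_APP_OPENAI_API_BASE
  secretKey: string, // project secret key, used as the API key
  orgId: string,     // tenant identifier
  model: string,
  prompt: string,
): ChatRequest {
  return {
    url: `${base}/chat/completions`,
    headers: {
      "Content-Type": "application/json",
      // Project secret key in the OpenAI-compatible Authorization header.
      Authorization: `Bearer ${secretKey}`,
      // Forwarded by the proxy to identify the tenant.
      "openai-organization": orgId,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

const req = buildChatRequest(
  "https://proxy.example.com/v1",
  "sk-project-secret",
  "org-123",
  "gpt-4o",
  "Hello",
);
// A real call would then be:
// fetch(req.url, { method: "POST", headers: req.headers, body: req.body })
```

Because the payload follows the OpenAI chat completion format, the same request shape works for every model the proxy exposes, regardless of the upstream provider.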
Conversation history¶
LLM Chat maintains a single conversation per session. The interface stores the exchange of prompts and model responses in memory for the duration of the page session; reloading the page starts a fresh conversation.
The LLMService that powers LLM Chat supports passing conversation history to the model for multi-turn conversations. Messages in the current session are appended as HumanMessage objects in a LangChain chain, giving the model context from earlier in the conversation.
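The session history described above can be sketched as an in-memory list of turns. The plain objects below stand in for LangChain's HumanMessage/AIMessage classes, and the SessionHistory class is an illustration of the bookkeeping, not the actual LLMService implementation.

```typescript
// In-memory session history, mirroring how LLMService appends prior
// turns before each call. Plain objects stand in for LangChain's
// HumanMessage and AIMessage classes.
type Turn = { role: "human" | "ai"; content: string };

class SessionHistory {
  private turns: Turn[] = [];

  addHuman(content: string): void {
    this.turns.push({ role: "human", content });
  }

  addAI(content: string): void {
    this.turns.push({ role: "ai", content });
  }

  // History passed along with the next prompt, giving the model
  // context from earlier in the conversation.
  toMessages(): Turn[] {
    return [...this.turns];
  }
}

const history = new SessionHistory();
history.addHuman("What is TrayLinx?");
history.addAI("A platform for ...");
history.addHuman("Which models can I chat with?");
const msgs = history.toMessages();
// msgs now holds three turns; because the store is in-memory only,
// a page reload discards it.
```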
Using prompt templates¶
LLM Chat integrates with TrayLinx's prompt template system. When you have prompt template assets in your project (created in Studio Tools as prompt_template subtype assets), you can reference them to pre-populate the prompt field with a known template.
Templates let you standardize inputs across repeated tasks — for example, a fixed evaluation rubric or a structured data extraction prompt — so you can change only the variable parts between runs.
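Pre-populating the prompt field from a template amounts to substituting the variable parts into a fixed skeleton. The sketch below assumes a {placeholder} syntax; TrayLinx prompt_template assets may use a different variable format, and fillTemplate is a name invented for this example.

```typescript
// Minimal sketch of filling a prompt template. The {placeholder}
// variable syntax is an assumption about the template format.
function fillTemplate(
  template: string,
  vars: Record<string, string>,
): string {
  // Replace each {name} with its value; unknown names are left intact.
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    name in vars ? vars[name] : match,
  );
}

// A fixed evaluation rubric where only the variable parts change
// between runs.
const rubric = "Rate the answer below from 1-5 for {criterion}:\n{answer}";
const prompt = fillTemplate(rubric, {
  criterion: "factual accuracy",
  answer: "Paris is the capital of France.",
});
```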
Response analysis¶
After the model responds, the response text appears in the Response card below the prompt input. The response is rendered with whiteSpace: pre-wrap so line breaks and indentation in the model output are preserved exactly.
To analyze or compare responses:
- Copy the response text manually for comparison across models.
- Switch the model in the selector and re-submit the same prompt to compare outputs side by side.
- Use the prompt field as a scratchpad — modify the prompt and re-generate to iterate on wording.
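The manual comparison workflow above can be supported by a small formatting helper: collect each model's response for the same prompt, then render them under labeled dividers for inspection. The model names and responses below are placeholders, and sideBySide is a helper invented for this sketch.

```typescript
// Render responses keyed by model as a side-by-side text report,
// one labeled section per model.
function sideBySide(responses: Record<string, string>): string {
  return Object.entries(responses)
    .map(([model, text]) => `=== ${model} ===\n${text}`)
    .join("\n\n");
}

const report = sideBySide({
  "gpt-4o": "Answer A ...",
  "claude-3-opus": "Answer B ...",
});
// report lists each model's raw output under its own divider, which
// pairs well with the whitespace-preserving rendering of responses.
```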