

LLM Chat

Chat directly with AI language models through TrayLinx's multi-model interface.

LLM Chat is a direct chat interface that connects to your project's active AI models through the TrayLinx LLM proxy. Unlike the Personal Assistant, which is a context-aware AI helper, LLM Chat gives you raw, direct access to any model available in your project, making it useful for prompt iteration, model comparison, and debugging.

Multi-model chat overview

LLM Chat fetches the list of active chat models registered in your project from the organization's model registry. Every model the LLM proxy has marked as active and mode=chat appears in the model selector.

Responses are returned as plain text and rendered with whitespace preserved, so you can inspect raw model output without Markdown processing.
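The selector's filtering can be pictured as follows. This is an illustrative sketch only: the field names (`active`, `mode`, `name`) are assumptions based on the description above, not the actual registry schema.

```python
def active_chat_models(registry_entries):
    """Return the names of models marked active with mode=chat,
    i.e. the entries LLM Chat would show in its model selector."""
    return [
        entry["name"]
        for entry in registry_entries
        if entry.get("active") and entry.get("mode") == "chat"
    ]

registry = [
    {"name": "gpt-4o", "active": True, "mode": "chat"},
    {"name": "text-embedder", "active": True, "mode": "embedding"},
    {"name": "legacy-chat", "active": False, "mode": "chat"},
]

print(active_chat_models(registry))  # -> ['gpt-4o']
```

Inactive models and non-chat modes (embeddings, completions-only) are filtered out before the list reaches the UI.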

Selecting a model

  1. Navigate to LLM Chat from the sidebar (or via the dashboard breadcrumb at Dashboard → LLM Chat).

  2. Click the Select Model dropdown. The list shows every active chat model your project has access to, identified by name. The first available model is selected by default.

  3. Type your message in the prompt field and click Generate. If no models are available, the interface is disabled.

LLM Chat requires an active project, and that project must have a valid secret key stored. If you see a "Project API key not found" error, navigate to your project's Settings → API Keys and confirm that a key has been created.

Direct API integration with model providers

All model calls in LLM Chat route through the TrayLinx LLM proxy (REACT_APP_OPENAI_API_BASE). The proxy:

  • Authenticates requests using your project's secret key as the OpenAI-compatible API key
  • Forwards the openai-organization header to identify the tenant
  • Returns responses in the OpenAI chat completion format

This means any model the proxy exposes — including third-party models from Anthropic, Google, or others — appears in LLM Chat without additional configuration, as long as it is registered and active in your organization.
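An equivalent direct request to the proxy might be assembled like this. This is a hedged sketch: the function name and shape are illustrative, the base URL is a placeholder standing in for the value of REACT_APP_OPENAI_API_BASE, and only the header behavior described above (secret key as Bearer token, openai-organization for the tenant) is taken from the source.

```python
import json

def build_chat_request(base_url, secret_key, org_id, model, prompt):
    """Assemble an OpenAI-compatible chat completion request for the
    TrayLinx LLM proxy, using the project's secret key as the API key."""
    url = f"{base_url}/chat/completions"
    headers = {
        "Authorization": f"Bearer {secret_key}",      # project secret key
        "openai-organization": org_id,                # identifies the tenant
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(payload)

url, headers, body = build_chat_request(
    "https://proxy.example.com/v1",   # placeholder for REACT_APP_OPENAI_API_BASE
    "sk-project-secret", "org-123", "gpt-4o", "Hello!",
)
# A real call would then be sent with any HTTP client, e.g.:
# requests.post(url, headers=headers, data=body)
```

Because the payload is the standard OpenAI chat completion shape, the same request works regardless of which upstream provider ultimately serves the model.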

Conversation history

LLM Chat maintains a single conversation per session. The interface stores your prompts and the model's responses in memory for the duration of the page session.

The LLMService that powers LLM Chat supports passing conversation history to the model for multi-turn conversations. Messages in the current session are appended as HumanMessage objects in a LangChain chain, giving the model context from earlier in the conversation.
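Conceptually, each new prompt is sent together with the prior turns. The actual LLMService builds this with LangChain message objects; the sketch below uses plain role dicts purely to illustrate the idea, and the function name is hypothetical.

```python
def with_history(history, new_prompt):
    """Return the full message list for the next model call: all prior
    turns in order, followed by the new user prompt. This mirrors how
    multi-turn context is passed, using plain dicts for illustration."""
    return history + [{"role": "user", "content": new_prompt}]

history = [
    {"role": "user", "content": "Name a prime number."},
    {"role": "assistant", "content": "7"},
]
messages = with_history(history, "Name a larger one.")
print(len(messages))  # -> 3
```

Because the whole history is re-sent on every call, longer sessions consume more of the model's context window.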

Conversation history in LLM Chat is not persisted across page reloads. For persistent, searchable conversation history, use the Personal Assistant.

Using prompt templates

LLM Chat integrates with TrayLinx's prompt template system. When you have prompt template assets in your project (created in Studio Tools as prompt_template subtype assets), you can reference them to pre-populate the prompt field with a known template.

Templates let you standardize inputs across repeated tasks — for example, a fixed evaluation rubric or a structured data extraction prompt — so you can change only the variable parts between runs.
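The idea of fixed structure plus variable slots can be shown with a minimal sketch. The template text and `render` helper here are hypothetical examples, not the actual prompt_template asset format.

```python
# A fixed extraction prompt with two variable slots: the fields to
# extract and the input text. Only the slots change between runs.
EXTRACTION_TEMPLATE = (
    "Extract the following fields from the text as JSON: {fields}.\n"
    "Text:\n{text}"
)

def render(template, **variables):
    """Fill a template's named slots with per-run values."""
    return template.format(**variables)

prompt = render(EXTRACTION_TEMPLATE, fields="name, date", text="Ada, 1843")
print(prompt)
```

Keeping the rubric fixed and varying only the slots makes outputs comparable across runs.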

Response analysis

After the model responds, the response text appears in the Response card below the prompt input. The response is rendered with whiteSpace: pre-wrap so line breaks and indentation in the model output are preserved exactly.

To analyze or compare responses:

  1. Copy the response text manually for comparison across models.
  2. Switch the model in the selector and re-submit the same prompt to compare outputs side by side.
  3. Use the prompt field as a scratchpad — modify the prompt and re-generate to iterate on wording.
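Step 2 above, re-submitting the same prompt across models, can be scripted outside the UI. In this sketch the `generate` callable is injected so the comparison loop stays independent of any particular HTTP client; the stub below is illustrative only and would be replaced with a real call through the proxy.

```python
def compare_models(models, prompt, generate):
    """Run the same prompt against several models and collect the
    responses keyed by model name, for side-by-side comparison."""
    return {model: generate(model, prompt) for model in models}

# Stub generator for illustration; a real one would call the proxy.
def fake_generate(model, prompt):
    return f"[{model}] echo: {prompt}"

results = compare_models(["model-a", "model-b"], "Summarize X.", fake_generate)
for model, response in results.items():
    print(model, "->", response)
```

This is the scripted equivalent of switching the model selector by hand and re-clicking Generate.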

Use LLM Chat to quickly test a prompt before building it into an agent or dataset generator in Studio Tools. Once the prompt is stable, copy it into a Prompt Template asset for reuse.
