gptel
- Description: Interact with ChatGPT or other LLMs
- Latest: gptel-0.9.0.0.20240910.170706.tar (.sig), 2024-Sep-11, 320 KiB
- Maintainer: Karthik Chikmagalur <karthik.chikmagalur@gmail.com>
- Website: https://github.com/karthink/gptel
- Browse ELPA's repository: CGit or Gitweb
To install this package from Emacs, use `package-install' or `list-packages'.
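Programmatically, that amounts to the following trivial sketch using the stock package.el commands:

    ;; Refresh the package list, then install gptel via package.el
    (package-refresh-contents)
    (package-install 'gptel)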
Full description
gptel is a simple Large Language Model chat client, with support for multiple models and backends. gptel supports:

- The services ChatGPT, Azure, Gemini, Anthropic AI, Anyscale, Together.ai, Perplexity, OpenRouter, Groq, PrivateGPT, DeepSeek, Cerebras and Kagi (FastGPT & Summarizer)
- Local models via Ollama, Llama.cpp, Llamafiles or GPT4All

Additionally, any LLM service (local or remote) that provides an OpenAI-compatible API is supported.

Features:

- It's async and fast, and streams responses.
- Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer, wherever).
- LLM responses are in Markdown or Org markup.
- Supports conversations and multiple independent sessions.
- Save chats as regular Markdown/Org/Text files and resume them later.
- You can go back and edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model.

Requirements for ChatGPT, Azure, Gemini or Kagi:

You need an appropriate API key. Set the variable `gptel-api-key' to the key, or to a function of no arguments that returns the key. (It tries to use `auth-source' by default.)

ChatGPT is configured out of the box. For the other sources:

- For Azure: define a gptel-backend with `gptel-make-azure', which see.
- For Gemini: define a gptel-backend with `gptel-make-gemini', which see.
- For Anthropic (Claude): define a gptel-backend with `gptel-make-anthropic', which see.
- For Together.ai, Anyscale, Perplexity, Groq, OpenRouter, DeepSeek or Cerebras: define a gptel-backend with `gptel-make-openai', which see.
- For PrivateGPT: define a backend with `gptel-make-privategpt', which see.
- For Kagi: define a gptel-backend with `gptel-make-kagi', which see.

For local models using Ollama, Llama.cpp or GPT4All:

- The model has to be running on an accessible address (or localhost).
- Define a gptel-backend with `gptel-make-ollama' or `gptel-make-gpt4all', which see.
- For Llama.cpp or Llamafiles: define a gptel-backend with `gptel-make-openai'.

Consult the package README for examples and more help with configuring backends; illustrative sketches also follow at the end of this description.

Usage:

gptel can be used in any buffer or in a dedicated chat buffer. The interaction model is simple: type in a query, and the response will be inserted below. You can continue the conversation by typing below the response.

To use this in any buffer:

- Call `gptel-send' to send the buffer's text up to the cursor. Select a region to send only the region.
- You can select previous prompts and responses to continue the conversation.
- Call `gptel-send' with a prefix argument to access a menu where you can set your backend, model and other parameters, or redirect the prompt/response.

To use this in a dedicated buffer:

- M-x gptel: Start a chat session.
- In the chat session: press `C-c RET' (`gptel-send') to send your prompt. Use a prefix argument (`C-u C-c RET') to access a menu. In this menu you can set chat parameters like the system directives, active backend or model, or choose to redirect the input or output elsewhere (such as to the kill ring).
- You can save this buffer to a file. When opening this file, turn on `gptel-mode' before editing it to restore the conversation state and continue chatting.

Include more context with requests:

If you want to provide the LLM with more context, you can add arbitrary regions, buffers or files to the query with `gptel-add'. (Call `gptel-add' in Dired, or use the dedicated `gptel-add-file', to add files.)
You can also add context from gptel's menu instead (`gptel-send' with a prefix arg), as well as examine or modify the context. When context is available, gptel will include it with each LLM query.

gptel in Org mode:

gptel offers a few extra conveniences in Org mode:

- You can limit the conversation context to an Org heading with `gptel-org-set-topic'.
- You can have branching conversations in Org mode, where each hierarchical outline path through the document is a separate conversation branch. See the variable `gptel-org-branching-context'.
- You can declare the gptel model, backend, temperature, system message and other parameters as Org properties with the command `gptel-org-set-properties'. gptel queries under the corresponding heading will always use these settings, allowing you to create mostly reproducible LLM chat notebooks.

Finally, gptel offers a general-purpose API for writing LLM interactions that suit your workflow; see `gptel-request' and the sketch at the end of this description.
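As an illustration, a remote backend might be set up as below. This is a minimal sketch: the display name "Claude", the model, and the environment variable names are placeholder assumptions, not gptel defaults.

    ;; ChatGPT works out of the box once the key is set.  The key can
    ;; be a string, or a function of no arguments returning a string.
    (setq gptel-api-key (lambda () (getenv "OPENAI_API_KEY")))

    ;; Sketch: register an Anthropic backend and make it the default.
    ;; "Claude" is an arbitrary display name; the model and the
    ;; environment variable are placeholders.
    (setq gptel-model 'claude-3-5-sonnet-20240620
          gptel-backend (gptel-make-anthropic "Claude"
                          :stream t
                          :key (getenv "ANTHROPIC_API_KEY")))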
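Local backends follow the same pattern. In this sketch the hosts, ports and model names are assumptions that must match whatever your local server actually serves:

    ;; Sketch: Ollama on its default local port.  Model names must
    ;; match what `ollama list' reports on your machine.
    (gptel-make-ollama "Ollama"
      :host "localhost:11434"
      :stream t
      :models '(mistral:latest))

    ;; Sketch: a Llama.cpp or Llamafile server exposes an
    ;; OpenAI-compatible API, so `gptel-make-openai' is used.
    (gptel-make-openai "Llama.cpp"
      :host "localhost:8000"
      :protocol "http"
      :stream t
      :models '(test))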
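And a non-interactive query via `gptel-request' might look like the following minimal sketch; the prompt and the `message' output are illustrative, and on failure the callback receives nil plus an info plist:

    ;; Sketch: send a one-off prompt and handle the response in a
    ;; callback instead of inserting it into a buffer.
    (gptel-request "Write a haiku about Emacs."
      :callback (lambda (response info)
                  (if response
                      (message "gptel: %s" response)
                    (message "gptel request failed: %s"
                             (plist-get info :status)))))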
Old versions
gptel-0.9.0.0.20240910.5210.tar.lz | 2024-Sep-10 | 60.7 KiB
gptel-0.9.0.0.20240907.192506.tar.lz | 2024-Sep-08 | 60.2 KiB
gptel-0.9.0.0.20240905.124650.tar.lz | 2024-Sep-05 | 60.0 KiB
gptel-0.9.0.0.20240901.222432.tar.lz | 2024-Sep-02 | 58.1 KiB
gptel-0.9.0.0.20240831.191032.tar.lz | 2024-Sep-01 | 58.1 KiB
gptel-0.9.0.0.20240730.123127.tar.lz | 2024-Jul-30 | 58.0 KiB
gptel-0.9.0.0.20240626.154646.tar.lz | 2024-Jun-27 | 58.0 KiB
gptel-0.8.6.0.20240623.113847.tar.lz | 2024-Jun-23 | 57.2 KiB
gptel-0.8.6.0.20240526.170506.tar.lz | 2024-May-27 | 50.5 KiB
gptel-0.8.5.0.20240429.130547.tar.lz | 2024-May-01 | 49.1 KiB