| Date | Commit message | Author | Files | |
|---|---|---|---|---|
| 2025-01-31 | chore: rebase and fix up conflicts | Max Resnick | 1 | |
| 2024-12-22 | added image generation | Martin Bielik | 1 | |
| 2024-12-21 | fix test: sorting glob output | Martin Bielik | 1 | |
| 2024-12-21 | image to text support, closes #134 | Martin Bielik | 1 | |
| 2024-12-21 | parse chat messages tests | Martin Bielik | 1 | |
| 2024-12-17 | special role | Martin Bielik | 1 | |
| 2024-12-17 | introduced pre-defined default roles | Martin Bielik | 1 | |
| 2024-12-16 | refactoring: import python when needed, run as functions | Martin Bielik | 1 | |
| 2024-12-15 | refactoring: make prompt in python | Martin Bielik | 1 | |
| 2024-12-15 | unified config parsing + tests | Martin Bielik | 1 | |
| 2024-12-12 | fixed roles parsing | Martin Bielik | 1 | |
| 2024-12-12 | fixed complete command roles after refactoring | Martin Bielik | 1 | |
| 2024-12-12 | execute multiple roles | Martin Bielik | 1 | |
| 2024-12-11 | fix(utils): improve response mapping | Jason Kölker | 1 | |
Make the response mapping more robust by checking for an empty (or missing) `choices` list and substituting a list with an empty dictionary. Use `.get` to access the `message` or `delta` object, again returning an empty dictionary if they are not found.

When using `hermes3-405b` on Lambda Cloud's inference (based on openrouter), a final response was returned with an empty list for `choices`, causing a traceback on completion. Debug log:

```
[2024-12-11 19:49:11.925592] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [{'index': 0, 'delta': {'content': ' today'}, 'finish_reason': None, 'content_filter_results': {'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}}}], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 18, 'total_tokens': 58, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
[2024-12-11 19:49:11.975457] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [{'index': 0, 'delta': {'content': '?'}, 'finish_reason': None, 'content_filter_results': {'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}}}], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 19, 'total_tokens': 59, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
[2024-12-11 19:49:12.008987] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [{'index': 0, 'delta': {}, 'finish_reason': 'stop', 'content_filter_results': {'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}}}], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 20, 'total_tokens': 60, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
[2024-12-11 19:49:12.009400] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 19, 'total_tokens': 59, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
```
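The defensive mapping described above can be sketched as follows. This is a minimal illustration; `extract_content` is a hypothetical helper name, not the plugin's actual function.

```python
def extract_content(response):
    """Return the text content of a streaming chunk, tolerating an empty
    or missing 'choices' list (hypothetical helper, not the plugin's code)."""
    # Substitute a list with one empty dict so indexing never raises.
    choices = response.get('choices') or [{}]
    first = choices[0]
    # .get() falls back to an empty dict when 'delta'/'message' is absent.
    payload = first.get('delta') or first.get('message') or {}
    return payload.get('content', '')
```

With this shape, the final `'choices': []` chunk from the log above simply yields an empty string instead of an `IndexError`.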
| 2024-12-07 | fixed options normalization | Martin Bielik | 1 | |
| 2024-12-07 | fix: grok xai blocks without user agent | Dru Jensen | 1 | |
| 2024-12-07 | fix debug logging without arguments | Martin Bielik | 1 | |
| 2024-12-06 | allow override global token config | Martin Bielik | 1 | |
| 2024-12-05 | fixed stream=0 in chat engine | Martin Bielik | 1 | |
| 2024-12-05 | escaping error message | Martin Bielik | 1 | |
| 2024-12-03 | o1 support - max_completion_tokens | Martin Bielik | 1 | |
| 2024-11-08 | improved error handling, fixes #126 | Martin Bielik | 1 | |
| 2024-10-08 | support non streaming api | Martin Bielik | 1 | |
| 2024-06-11 | fixes #110, python compatibility issue with escape sequence | Martin Bielik | 1 | |
| 2024-06-04 | Fix print_info_message <Esc> issue | Michael Buckley | 1 | |
I ran into an issue when first using this plugin where the `print_info_message` function wasn't working correctly: vim misinterpreted the `<Esc>` sequence in `vim.command("normal \\<Esc>")` as a series of individual characters rather than a single literal Escape character. This resulted in the characters 'c>' being inserted into the active buffer at the cursor location, because the 's' in '<Esc>' was interpreted as a normal-mode 's', causing vim to enter insert mode, and none of the info messages were echoed properly. This was frustrating, as it was not easy to figure out why my commands weren't working initially (turns out I hadn't configured my billing plan correctly, d'oh).

Fix this by using a more robust way of sending the `<Esc>` character to vim via `vim.command('call feedkeys("\<Esc>")')`. The usage of double quotes inside the feedkeys() call is important because it causes vim to treat the sequence as a proper escape sequence rather than a series of individual characters (see :h feedkeys).
| 2024-03-24 | reusing parsing code | Martin Bielik | 1 | |
| 2024-03-11 | optionally supplement roles dict by vim function source | Konfekt | 1 | |
The application was restricted to loading role configurations only from a predefined config file, which limited extensibility. Enable dynamic role configuration by invoking a custom Vim function if it is defined. This allows users to extend the role configurations beyond the static file.

```
diff --git a/doc/vim-ai.txt b/doc/vim-ai.txt
-The roles in g:vim_ai_roles_config_file are converted to a Vim dictionary.
-Optionally, additional roles can be added by defining a function VimAIRoleParser()
-whose output is a dictionary of the same format as g:vim_ai_roles_config_file.
-
diff --git a/py/roles.py b/py/roles.py
-if vim.eval('exists("*VimAIRoleParser")'):
-    roles.update(vim.eval('VimAIRoleParser()'))
-
diff --git a/py/utils.py b/py/utils.py
-    if vim.eval('exists("*VimAIRoleParser")'):
-        roles.update(vim.eval('VimAIRoleParser()'))
-
```
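A rough sketch of the merge described above, under stated assumptions: `load_roles` is a hypothetical name, and `vim_eval` stands in for Vim's embedded `vim.eval`, which returns the strings `'1'`/`'0'` for `exists()`.

```python
def load_roles(config_roles, vim_eval):
    """Merge static roles with the output of an optional VimAIRoleParser()
    Vim function (illustrative sketch, not the plugin's literal code)."""
    # Start from the roles parsed out of g:vim_ai_roles_config_file.
    roles = dict(config_roles)
    # If the user defined VimAIRoleParser(), merge its dictionary on top.
    if vim_eval('exists("*VimAIRoleParser")') == '1':
        roles.update(vim_eval('VimAIRoleParser()'))
    return roles
```

Because `update` is applied after the static config is copied, dynamically supplied roles can also override entries of the same name.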
| 2024-03-09 | support config-only roles | Martin Bielik | 1 | |
| 2024-03-09 | simple error handling | Martin Bielik | 1 | |
| 2024-03-09 | roles example file | Martin Bielik | 1 | |
| 2024-03-09 | parse role options | Martin Bielik | 1 | |
| 2024-03-09 | read role prompt from config | Martin Bielik | 1 | |
| 2024-03-09 | removed config path log | Martin Bielik | 1 | |
| 2024-03-08 | feat: add an option to customize api key file location | jiangyinzuo | 1 | |
| 2024-01-24 | feat(chat): add `include` role to include files | Jason Kölker | 1 | |
Files may be included in the chat by a special `include` role. Each file's contents are added to an additional `user` role message, with the files separated by `==> {path} <==`, where `{path}` is the path to the file. Globs are expanded via `glob.glob`, and paths relative to the current working directory (as determined by `getcwd()`) are resolved to absolute paths.

Example:

```
>>> user
Generate documentation for the following files

>>> include
/home/user/myproject/src/../requirements.txt
/home/user/myproject/**/*.py
```

Fixes: #69
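The expansion and message rendering described above can be sketched roughly as follows. The function names are illustrative, not the plugin's actual helpers.

```python
import glob
import os

def expand_include_paths(patterns, cwd=None):
    """Expand globs via glob.glob and resolve paths relative to the
    working directory (illustrative sketch of the behavior above)."""
    cwd = cwd or os.getcwd()
    resolved = []
    for pattern in patterns:
        if not os.path.isabs(pattern):
            pattern = os.path.join(cwd, pattern)
        # recursive=True lets '**' match nested directories.
        resolved.extend(sorted(glob.glob(pattern, recursive=True)))
    return resolved

def render_include_message(paths):
    # Files are concatenated into one user message, separated by
    # "==> {path} <==" headers.
    parts = []
    for path in paths:
        with open(path) as f:
            parts.append(f"==> {path} <==\n{f.read()}")
    return "\n".join(parts)
```

Sorting the glob output keeps the resulting message deterministic, which also matters for the "fix test: sorting glob output" commit above.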
| 2023-12-02 | added explaining comment | Martin Bielik | 1 | |
| 2023-12-02 | fix selection including extra content when the user is in visual mode | cposture | 1 | |
| 2023-11-01 | fixed python3.12 slash escaping, fixes #61 | Martin Bielik | 1 | |
| 2023-10-21 | option to disable authorization | Martin Bielik | 1 | |
| 2023-09-18 | Add support for base_url option to use local models | juodumas | 1 | |
For example, you can start llama-cpp-python like this (it emulates the openai api):

```sh
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install 'llama-cpp-python[server]'
wget https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/resolve/main/codellama-13b-instruct.Q5_K_M.gguf
python3 -m llama_cpp.server --n_gpu_layers 100 --model codellama-13b-instruct.Q5_K_M.gguf
```

Then set the API url in your `.vimrc`:

```vim
let g:vim_ai_chat = {
\  "engine": "chat",
\  "options": {
\    "base_url": "http://127.0.0.1:8000",
\  },
\}
```

And chat with the locally hosted AI using `:AIChat`. The change in utils.py was needed because llama-cpp-python adds a new line to the final response: `[DONE]^M`.
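The utils.py tweak mentioned above can be illustrated with a hedged sketch. The function name and the exact `data:`-prefixed line format are assumptions based on OpenAI-style streaming, not the plugin's literal code; the point is that stripping the line tolerates the trailing carriage return in `[DONE]^M`.

```python
import json

def parse_stream_line(line):
    """Parse one OpenAI-style streaming line; return None for the
    '[DONE]' sentinel even when it carries a trailing '\r'."""
    line = line.strip()  # removes '\n' and a stray '\r' alike
    if not line.startswith('data: '):
        return None
    payload = line[len('data: '):].strip()
    if payload == '[DONE]':
        return None
    return json.loads(payload)
```

Without the `strip()`, the sentinel comparison would see `'[DONE]\r'` and fall through to `json.loads`, which raises on the non-JSON payload.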
| 2023-09-09 | Include OpenAI Org ID from the token config | Duy Lam | 1 | |
| 2023-06-25 | allow string in initial_prompt, closes #35 | Martin Bielik | 1 | |
| 2023-06-11 | optional max_tokens, fixes #42 | Martin Bielik | 1 | |
| 2023-05-23 | importing vim module, fixes #43 | Martin Bielik | 1 | |
| 2023-05-19 | print error in debug | Martin Bielik | 1 | |
| 2023-05-14 | clear echo message after completion | Martin Bielik | 1 | |
| 2023-05-04 | Allow single undo | BonaBeavis | 1 | |
| Fixes https://github.com/madox2/vim-ai/issues/14 | ||||
| 2023-04-26 | http error handling | Martin Bielik | 1 | |
| 2023-04-22 | recover for unfinished chat | Martin Bielik | 1 | |
| 2023-04-18 | empty message warning, reference #20 | Martin Bielik | 1 | |