path: root/py
Date | Commit message | Author | Files
2024-12-12 | fixed roles parsing [main] | Martin Bielik | 1
2024-12-12 | fixed complete command roles after refactoring | Martin Bielik | 3
2024-12-12 | execute multiple roles | Martin Bielik | 1
2024-12-11 | fix(utils): improve response mapping | Jason Kölker | 1
Make the response mapping more robust by checking for an empty (or missing) `choices` list and substituting a list containing an empty dictionary in its place. Use `.get` to access the `message` or `delta` object, again returning an empty dictionary if they are not found.

When using `hermes3-405b` on lambda cloud's inference (based on openrouter), a final response was returned with an empty list for `choices`, causing a traceback on completion.

Debug log:

```
[2024-12-11 19:49:11.925592] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [{'index': 0, 'delta': {'content': ' today'}, 'finish_reason': None, 'content_filter_results': {'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}}}], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 18, 'total_tokens': 58, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
[2024-12-11 19:49:11.975457] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [{'index': 0, 'delta': {'content': '?'}, 'finish_reason': None, 'content_filter_results': {'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}}}], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 19, 'total_tokens': 59, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
[2024-12-11 19:49:12.008987] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [{'index': 0, 'delta': {}, 'finish_reason': 'stop', 'content_filter_results': {'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}}}], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 20, 'total_tokens': 60, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
[2024-12-11 19:49:12.009400] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 19, 'total_tokens': 59, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
```
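A minimal Python sketch of the defensive mapping described above; the function and variable names here are illustrative, not the plugin's actual identifiers:

```python
def extract_content(response):
    # An empty or missing `choices` list is replaced by a list containing an
    # empty dictionary, so the final `'choices': []` chunk no longer raises.
    choices = response.get('choices') or [{}]
    choice = choices[0]
    # `.get` returns an empty dictionary when `message`/`delta` are absent.
    message = choice.get('message') or choice.get('delta') or {}
    return message.get('content', '')
```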
2024-12-08 | print prompt in debug mode | Martin Bielik | 2
2024-12-07 | fixed options normalization | Martin Bielik | 3
2024-12-07 | fix: grok xai blocks without user agent | Dru Jensen | 1
2024-12-07 | fix debug logging without arguments | Martin Bielik | 1
2024-12-06 | allow override global token config | Martin Bielik | 1
2024-12-05 | fixed stream=0 in chat engine | Martin Bielik | 3
2024-12-05 | escaping error message | Martin Bielik | 1
2024-12-03 | o1 support - max_completion_tokens | Martin Bielik | 1
2024-11-08 | improved error handling, fixes #126 | Martin Bielik | 1
2024-10-08 | support non streaming api | Martin Bielik | 3
2024-06-11 | fixes #110, python compatibility issue with escape sequence | Martin Bielik | 1
2024-06-04 | Fix print_info_message <Esc> issue | Michael Buckley | 1
I ran into an issue when first using this plugin where the print_info_message function wasn't working correctly: Vim misinterpreted the <Esc> sequence in `vim.command("normal \\<Esc>")` as a series of individual characters rather than a single literal Escape character. Because the 's' in '<Esc>' was interpreted as a normal-mode 's', Vim entered insert mode, the characters 'c>' were inserted into the active buffer at the cursor location, and none of the info messages were echoed properly. This was frustrating as it was not easy to figure out why my commands weren't working initially (turns out I hadn't configured my billing plan correctly, d'oh).

Fix this by using a more robust way of sending the <Esc> character to Vim via `vim.command('call feedkeys("\<Esc>")')`. The usage of double quotes inside the feedkeys() call is important because it causes Vim to treat the sequence as a proper escape sequence rather than a series of individual characters (see :h feedkeys).
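For illustration, a sketch of the before/after described above, as it would appear in the plugin's embedded-Python context (`vim` is the module provided by Vim's `:python3` interface):

```python
import vim  # available only inside Vim's embedded Python

# Before: the escape sequence reached Vim as literal characters, so the 's'
# in '<Esc>' ran normal-mode 's' and left 'c>' in the buffer:
#   vim.command("normal \\<Esc>")

# After: double quotes inside feedkeys() make Vim expand "\<Esc>" into a
# single Escape keypress (see :h feedkeys):
vim.command('call feedkeys("\<Esc>")')
```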
2024-03-24 | reusing parsing code | Martin Bielik | 2
2024-03-11 | optionally supplement roles dict by vim function source | Konfekt | 2
The application was restricted to loading role configurations only from a predefined config file, which limited extensibility. Enable dynamic role configuration by invoking a custom Vim function if it is defined. This allows users to extend the role configurations beyond the static file.

diff --git a/doc/vim-ai.txt b/doc/vim-ai.txt:
-The roles in g:vim_ai_roles_config_file are converted to a Vim dictionary.
-Optionally, additional roles can be added by defining a function VimAIRoleParser()
-whose output is a dictionary of the same format as g:vim_ai_roles_config_file.

diff --git a/py/roles.py b/py/roles.py:
-if vim.eval('exists("*VimAIRoleParser")'):
-    roles.update(vim.eval('VimAIRoleParser()'))

diff --git a/py/utils.py b/py/utils.py:
-    if vim.eval('exists("*VimAIRoleParser")'):
-        roles.update(vim.eval('VimAIRoleParser()'))
2024-03-10 | Ensure role config file exists before loading to prevent errors | Konfekt | 1
The problem was that the application tried to load a roles configuration file without checking whether it actually exists, potentially leading to unhandled exceptions if the file is missing. Ensure that the roles configuration file exists before attempting to read from it; raise an exception with a clear message if the file is not found.

diff --git a/py/roles.py b/py/roles.py:
-if not os.path.exists(roles_config_path):
-    raise Exception(f"Role config file does not exist: {roles_config_path}")
2024-03-09 | support config only roles | Martin Bielik | 1
2024-03-09 | simple error handling | Martin Bielik | 1
2024-03-09 | fix using role in existing chat | Martin Bielik | 1
2024-03-09 | roles example file | Martin Bielik | 2
2024-03-09 | roles completion | Martin Bielik | 1
2024-03-09 | parse role options | Martin Bielik | 3
2024-03-09 | read role prompt from config | Martin Bielik | 3
2024-03-09 | removed config path log | Martin Bielik | 1
2024-03-08 | feat: add an option to customize api key file location | jiangyinzuo | 1
2024-01-24 | feat(chat): add `include` role to include files | Jason Kölker | 1
Files may be included in the chat by a special `include` role. Each file's contents will be added to an additional `user` role message, with the files separated by `==> {path} <==` where `{path}` is the path to the file. Globbing is expanded via `glob.glob`, and paths relative to the current working directory (as determined by `getcwd()`) will be resolved to absolute paths.

Example:

```
>>> user

Generate documentation for the following files

>>> include

/home/user/myproject/src/../requirements.txt
/home/user/myproject/**/*.py
```

Fixes: #69
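A rough Python sketch of the expansion this commit describes; the function name `expand_include_role` and the exact message shape are assumptions, not taken from the plugin's source:

```python
import glob
import os

def expand_include_role(patterns):
    # Resolve patterns relative to the current working directory, then glob
    # (recursive=True so that `**` patterns match nested directories).
    chunks = []
    for pattern in patterns:
        if not os.path.isabs(pattern):
            pattern = os.path.join(os.getcwd(), pattern)
        for path in sorted(glob.glob(pattern, recursive=True)):
            with open(path) as f:
                # Each file is introduced by the `==> {path} <==` separator.
                chunks.append(f"==> {path} <==\n{f.read()}")
    # The expanded contents become an additional `user` role message.
    return {"role": "user", "content": "\n".join(chunks)}
```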
2023-12-23 | import vim before utils, fixes #43 | Martin Bielik | 2
2023-12-02 | added explaining comment | Martin Bielik | 1
2023-12-02 | fix selection include extra content when the user is in visual mode | cposture | 3
2023-11-01 | fixed python3.12 slash escaping, fixes #61 | Martin Bielik | 1
2023-10-21 | removed unused import | Martin Bielik | 2
2023-10-21 | endpoint_url config | Martin Bielik | 2
2023-10-21 | option to disable authorization | Martin Bielik | 1
2023-10-21 | base_url extracted to config, docu | Martin Bielik | 2
2023-09-18 | Add support for base_url option to use local models | juodumas | 3
For example, you can start llama-cpp-python like this (it emulates the OpenAI API):

```sh
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install 'llama-cpp-python[server]'
wget https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/resolve/main/codellama-13b-instruct.Q5_K_M.gguf
python3 -m llama_cpp.server --n_gpu_layers 100 --model codellama-13b-instruct.Q5_K_M.gguf
```

Then set the API url in your `.vimrc`:

```vim
let g:vim_ai_chat = {
\  "engine": "chat",
\  "options": {
\    "base_url": "http://127.0.0.1:8000",
\  },
\}
```

And chat with the locally hosted AI using `:AIChat`. The change in utils.py was needed because llama-cpp-python adds a new line to the final response: `[DONE]^M`.
2023-09-09 | Include OpenAI Org ID from the token config | Duy Lam | 1
2023-06-25 | allow string in initial_prompt, closes #35 | Martin Bielik | 3
2023-06-11 | optional max_tokens, fixes #42 | Martin Bielik | 1
2023-05-23 | importing vim module, fixes #43 | Martin Bielik | 1
2023-05-19 | print error in debug | Martin Bielik | 3
2023-05-14 | clear echo message after completion | Martin Bielik | 3
2023-05-04 | Allow single undo | BonaBeavis | 1
Fixes https://github.com/madox2/vim-ai/issues/14
2023-04-26 | http error handling | Martin Bielik | 1
2023-04-22 | pass config as a parameter | Martin Bielik | 2
2023-04-22 | recover for unfinished chat | Martin Bielik | 2
2023-04-21 | move prompt to python | Martin Bielik | 1
2023-04-18 | empty message warning, reference #20 | Martin Bielik | 1