path: root/py/utils.py

Commit log (subject, author, date, files changed):
* added image generation (Martin Bielik, 2024-12-22, 1 file)
|
* fix test: sorting glob output (Martin Bielik, 2024-12-21, 1 file)
|
* image to text support, closes #134 (Martin Bielik, 2024-12-21, 1 file)
|
* parse chat messages tests (Martin Bielik, 2024-12-21, 1 file)
|
* special role (Martin Bielik, 2024-12-17, 1 file)
|
* introduced pre-defined default roles (Martin Bielik, 2024-12-17, 1 file)
|
* refactoring: import python when needed, run as functions (Martin Bielik, 2024-12-16, 1 file)
|
* refactoring: make prompt in python (Martin Bielik, 2024-12-15, 1 file)
|
* unified config parsing + tests (Martin Bielik, 2024-12-15, 1 file)
|
* fixed roles parsing (main) (Martin Bielik, 2024-12-12, 1 file)
|
* fixed complete command roles after refactoring (Martin Bielik, 2024-12-12, 1 file)
|
* execute multiple roles (Martin Bielik, 2024-12-12, 1 file)
|
* fix(utils): improve response mapping (Jason Kölker, 2024-12-11, 1 file)

    Make the response mapping more robust by checking for an empty (or
    missing) `choices` list and substituting a list containing a single empty
    dictionary in its place. Use `.get` to access the `message` or `delta`
    object, again returning an empty dictionary if it is not found.

    When using `hermes3-405b` on lambda cloud's inference (based on
    openrouter), a final response was returned with an empty list for
    choices, causing a traceback on completion.

    Debug log:

    ```
    [2024-12-11 19:49:11.925592] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [{'index': 0, 'delta': {'content': ' today'}, 'finish_reason': None, 'content_filter_results': {'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}}}], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 18, 'total_tokens': 58, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
    [2024-12-11 19:49:11.975457] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [{'index': 0, 'delta': {'content': '?'}, 'finish_reason': None, 'content_filter_results': {'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}}}], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 19, 'total_tokens': 59, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
    [2024-12-11 19:49:12.008987] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [{'index': 0, 'delta': {}, 'finish_reason': 'stop', 'content_filter_results': {'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}}}], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 20, 'total_tokens': 60, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
    [2024-12-11 19:49:12.009400] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 19, 'total_tokens': 59, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
    ```
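A minimal sketch of the defensive mapping this commit describes; the helper name is made up, and the chunk shape follows the debug log above:

```python
# Hypothetical helper illustrating the defensive access pattern described
# above; this is not vim-ai's actual code.
def map_chunk_content(response):
    # An empty or missing 'choices' list falls back to [{}], so the
    # indexing below never raises IndexError (the failure seen on the
    # final hermes3-405b chunk with 'choices': []).
    choices = response.get('choices') or [{}]
    choice = choices[0]
    # Streaming chunks carry 'delta', non-streaming responses carry
    # 'message'; .get returns {} instead of raising KeyError.
    content = (choice.get('delta', {}).get('content')
               or choice.get('message', {}).get('content'))
    return content or ''
```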
* Merge pull request #136 from drujensen/feature/fix-grok-xai (Martin Bielik, 2024-12-07, 1 file)
|\
| |     fix: grok xai blocks without user agent, fixes #136
| * fix: grok xai blocks without user agent (Dru Jensen, 2024-12-07, 1 file)
| |
* | fixed options normalization (Martin Bielik, 2024-12-07, 1 file)
|/
* fix debug logging without arguments (Martin Bielik, 2024-12-07, 1 file)
|
* allow override global token config (Martin Bielik, 2024-12-06, 1 file)
|
* fixed stream=0 in chat engine (Martin Bielik, 2024-12-05, 1 file)
|
* escaping error message (Martin Bielik, 2024-12-05, 1 file)
|
* o1 support - max_completion_tokens (Martin Bielik, 2024-12-03, 1 file)
|
* Merge branch 'main' into support-non-streaming (Martin Bielik, 2024-12-03, 1 file)
|\
| * improved error handling, fixes #126 (Martin Bielik, 2024-11-08, 1 file)
| |
* | support non-streaming api (Martin Bielik, 2024-10-08, 1 file)
|/
* fixes #110, python compatibility issue with escape sequence (Martin Bielik, 2024-06-11, 1 file)
|
* Fix print_info_message <Esc> issue (Michael Buckley, 2024-06-04, 1 file)

    I ran into an issue when first using this plugin where the
    print_info_message function wasn't working correctly due to vim
    misinterpreting the <Esc> sequence in `vim.command("normal \\<Esc>")` as
    a series of individual characters rather than a single literal Escape
    character. This resulted in the characters 'c>' being inserted into the
    active buffer at the cursor location, because the 's' in '<Esc>' was
    being interpreted as a normal-mode 's', causing it to enter insert mode,
    and none of the info messages were being echoed properly. This was
    frustrating, as it was not easy to figure out why my commands weren't
    working initially (turns out I hadn't configured my billing plan
    correctly, d'oh).

    Fix this by using a more robust way of sending the <Esc> character to
    vim via `vim.command('call feedkeys("\<Esc>")')`. The usage of double
    quotes inside the feedkeys() call is important because it causes vim to
    treat the sequence as a proper escape sequence rather than a series of
    individual characters (see :h feedkeys).
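For reference, the two calls contrasted above, side by side; this assumes Vim's embedded Python interpreter, where the `vim` module is available:

```python
import vim  # only importable inside Vim's embedded Python

# Broken: Python escapes the backslash, so Vim receives the five literal
# characters <, E, s, c, > and executes them as normal-mode keystrokes.
# vim.command("normal \\<Esc>")

# Fixed: feedkeys() with a double-quoted Vim string turns "\<Esc>" into a
# single literal Escape key press (see :h feedkeys, :h expr-quote).
vim.command('call feedkeys("\<Esc>")')
```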
* reusing parsing code (Martin Bielik, 2024-03-24, 1 file)
|
* optionally supplement roles dict by vim function source (Konfekt, 2024-03-11, 1 file)

    The application was restricted to loading role configurations only from
    a predefined config file, which limited extensibility. Enable dynamic
    role configuration by invoking a custom Vim function if it is defined.
    This allows users to extend the role configurations beyond the static
    file.

    ```
    diff --git a/doc/vim-ai.txt b/doc/vim-ai.txt:
    -The roles in g:vim_ai_roles_config_file are converted to a Vim dictionary.
    -Optionally, additional roles can be added by defining a function VimAIRoleParser()
    -whose output is a dictionary of the same format as g:vim_ai_roles_config_file.

    diff --git a/py/roles.py b/py/roles.py:
    -if vim.eval('exists("*VimAIRoleParser")'):
    -    roles.update(vim.eval('VimAIRoleParser()'))

    diff --git a/py/utils.py b/py/utils.py:
    -    if vim.eval('exists("*VimAIRoleParser")'):
    -        roles.update(vim.eval('VimAIRoleParser()'))
    ```
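A self-contained sketch of the merge order the message describes; the static `roles` dict below is a placeholder, and the `vim.eval` calls mirror the diff excerpts above:

```python
import vim  # available inside Vim's embedded Python

# Static roles, as would be parsed from g:vim_ai_roles_config_file
# (contents assumed purely for illustration).
roles = {'translate': {'prompt': 'translate to english'}}

# If the user defined VimAIRoleParser() in vimscript, merge its dict on
# top, so dynamically generated roles extend the static file. Note that
# vim.eval returns the string '1'/'0' for exists().
if vim.eval('exists("*VimAIRoleParser")') == '1':
    roles.update(vim.eval('VimAIRoleParser()'))
```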
* support config-only roles (Martin Bielik, 2024-03-09, 1 file)
|
* simple error handling (Martin Bielik, 2024-03-09, 1 file)
|
* roles example file (Martin Bielik, 2024-03-09, 1 file)
|
* parse role options (Martin Bielik, 2024-03-09, 1 file)
|
* read role prompt from config (Martin Bielik, 2024-03-09, 1 file)
|
* removed config path log (Martin Bielik, 2024-03-09, 1 file)
|
* feat: add an option to customize api key file location (jiangyinzuo, 2024-03-08, 1 file)
|
* feat(chat): add `include` role to include files (Jason Kölker, 2024-01-24, 1 file)

    Files may be included in the chat by a special `include` role. Each
    file's contents will be added to an additional `user` role message, with
    the files separated by `==> {path} <==` where `{path}` is the path to
    the file. Globbing is expanded via `glob.glob`, and paths relative to
    the current working directory (as determined by `getcwd()`) are resolved
    to absolute paths.

    Example:

    ```
    >>> user
    Generate documentation for the following files

    >>> include
    /home/user/myproject/src/../requirements.txt
    /home/user/myproject/**/*.py
    ```

    Fixes: #69
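A rough sketch of the expansion behavior described above; `expand_include_message` is hypothetical, not the plugin's actual function, and only mirrors what the message states (glob expansion, absolute paths, `==> {path} <==` separators):

```python
import glob
import os

def expand_include_message(patterns):
    """Hypothetical illustration: build the extra `user` message from the
    patterns listed under an `include` role."""
    chunks = []
    for pattern in patterns:
        # recursive=True lets ** match nested directories; relative
        # patterns are resolved against os.getcwd() by glob itself.
        # sorted() gives deterministic order (cf. the 'sorting glob
        # output' fix earlier in this log).
        for path in sorted(glob.glob(pattern, recursive=True)):
            path = os.path.abspath(path)
            with open(path) as f:
                # files are separated by `==> {path} <==` headers
                chunks.append(f"==> {path} <==\n{f.read()}")
    return "\n".join(chunks)

# expand_include_message(['/home/user/myproject/**/*.py'])
```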
* added explanatory comment (Martin Bielik, 2023-12-02, 1 file)
|
* fix selection including extra content when the user is in visual mode (cposture, 2023-12-02, 1 file)
|
* fixed python3.12 slash escaping, fixes #61 (Martin Bielik, 2023-11-01, 1 file)
|
* Merge remote-tracking branch 'origin/main' into base-url-config (Martin Bielik, 2023-10-21, 1 file)
|\
| * Include OpenAI Org ID from the token config (Duy Lam, 2023-09-09, 1 file)
| |
* | option to disable authorization (Martin Bielik, 2023-10-21, 1 file)
| |
* | Add support for base_url option to use local models (juodumas, 2023-09-18, 1 file)
|/

    For example, you can start llama-cpp-python like this (it emulates the
    openai api):

    ```sh
    CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install 'llama-cpp-python[server]'
    wget https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/resolve/main/codellama-13b-instruct.Q5_K_M.gguf
    python3 -m llama_cpp.server --n_gpu_layers 100 --model codellama-13b-instruct.Q5_K_M.gguf
    ```

    Then set the API url in your `.vimrc`:

    ```vim
    let g:vim_ai_chat = {
    \  "engine": "chat",
    \  "options": {
    \    "base_url": "http://127.0.0.1:8000",
    \  },
    \}
    ```

    And chat with the locally hosted AI using `:AIChat`.

    The change in utils.py was needed because llama-cpp-python adds a new
    line to the final response: `[DONE]^M`.
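A guess at the kind of guard the last paragraph implies; `is_stream_done` is hypothetical and simply tolerates a trailing carriage return on the end-of-stream sentinel:

```python
def is_stream_done(raw_line: bytes) -> bool:
    """Hypothetical helper: treat the final SSE line as the end-of-stream
    marker even when the server appends a carriage return ([DONE]^M)."""
    # strip() removes \r as well as \n, so 'data: [DONE]\r\n' matches too
    return raw_line.strip() in (b'[DONE]', b'data: [DONE]')
```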
* allow string in initial_prompt, closes #35 (Martin Bielik, 2023-06-25, 1 file)
|
* optional max_tokens, fixes #42 (Martin Bielik, 2023-06-11, 1 file)
|
* importing vim module, fixes #43 (Martin Bielik, 2023-05-23, 1 file)
|
* print error in debug (Martin Bielik, 2023-05-19, 1 file)
|
* clear echo message after completion (Martin Bielik, 2023-05-14, 1 file)
|
* Allow single undo (BonaBeavis, 2023-05-04, 1 file)

    Fixes https://github.com/madox2/vim-ai/issues/14
* http error handling (Martin Bielik, 2023-04-26, 1 file)
|