Commit message  (Author, Age, Files)
* refactoring: import python when needed, run as functions  (Martin Bielik, 2024-12-16, 7 files)
|
* refactoring: make prompt in python  (Martin Bielik, 2024-12-15, 7 files)
|
* unified config parsing + tests  (Martin Bielik, 2024-12-15, 10 files)
|
* fixed roles parsing  [main]  (Martin Bielik, 2024-12-12, 1 file)
|
* fixed complete command roles after refactoring  (Martin Bielik, 2024-12-12, 3 files)
|
* execute multiple roles  (Martin Bielik, 2024-12-12, 3 files)
|
* Merge pull request #139 from jkoelker/improve_resp  (Martin Bielik, 2024-12-12, 1 file)
|\  fix(utils): improve response mapping
| * fix(utils): improve response mapping  (Jason Kölker, 2024-12-11, 1 file)
|/
    Make the response mapping more robust by checking for an empty (or
    missing) `choices` list and substituting a list with an empty dictionary.
    Use `.get` to access the `message` or `delta` object, again returning an
    empty dictionary if they are not found.

    When using `hermes3-405b` on lambda cloud's inference (based on
    openrouter), a final response was returned with an empty list for
    `choices`, causing a traceback on completion. Debug log:

    ```
    [2024-12-11 19:49:11.925592] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [{'index': 0, 'delta': {'content': ' today'}, 'finish_reason': None, 'content_filter_results': {'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}}}], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 18, 'total_tokens': 58, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
    [2024-12-11 19:49:11.975457] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [{'index': 0, 'delta': {'content': '?'}, 'finish_reason': None, 'content_filter_results': {'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}}}], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 19, 'total_tokens': 59, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
    [2024-12-11 19:49:12.008987] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [{'index': 0, 'delta': {}, 'finish_reason': 'stop', 'content_filter_results': {'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}}}], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 20, 'total_tokens': 60, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
    [2024-12-11 19:49:12.009400] [engine-chat] response: {'id': 'chatcmpl-140a7a938d2149c8a750f47af6a11be1', 'object': 'chat.completion.chunk', 'created': 1733946550, 'model': 'hermes3-405b', 'choices': [], 'system_fingerprint': '', 'usage': {'prompt_tokens': 40, 'completion_tokens': 19, 'total_tokens': 59, 'prompt_tokens_details': None, 'completion_tokens_details': None}}
    ```
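The fallback strategy described in the commit message above can be sketched in Python. This is an illustrative sketch, not the plugin's actual helper; the function name `extract_content` is hypothetical:

```python
# Sketch of the defensive mapping the commit describes: tolerate a missing or
# empty `choices` list and a missing `message`/`delta` object by falling back
# to empty dictionaries instead of raising.

def extract_content(response: dict) -> str:
    # An empty or absent `choices` list is replaced by [{}] so indexing is safe.
    choices = response.get('choices') or [{}]
    first = choices[0]
    # `.get` covers both the chat shape ('message') and the streaming
    # shape ('delta'); an empty dict is the final fallback.
    message = first.get('message') or first.get('delta') or {}
    return message.get('content') or ''

# The traceback-inducing final chunk from the debug log had 'choices': [];
# with the fallback it yields '' instead of raising IndexError.
final_chunk = {'object': 'chat.completion.chunk', 'choices': []}
print(repr(extract_content(final_chunk)))  # → ''
```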
* allow passing single line range  (Martin Bielik, 2024-12-10, 1 file)
|
* don't include not selected line, refactor ranges, fixes #112  (Martin Bielik, 2024-12-08, 4 files)
|
* print prompt in debug mode  (Martin Bielik, 2024-12-08, 2 files)
|
* Merge pull request #136 from drujensen/feature/fix-grok-xai  (Martin Bielik, 2024-12-07, 1 file)
|\  fix: grok xai blocks without user agent, fixes #136 104
| * fix: grok xai blocks without user agent  (Dru Jensen, 2024-12-07, 1 file)
| |
* | fixed options normalization  (Martin Bielik, 2024-12-07, 3 files)
|/
* improved initial message config  (Martin Bielik, 2024-12-07, 3 files)
|
* fix debug logging without arguments  (Martin Bielik, 2024-12-07, 1 file)
|
* improved openrouter guide  (Martin Bielik, 2024-12-06, 1 file)
|
* docu: custom apis, openrouter guide  (Martin Bielik, 2024-12-06, 1 file)
|
* allow override global token config  (Martin Bielik, 2024-12-06, 4 files)
|
* o1 role example  (Martin Bielik, 2024-12-05, 2 files)
|
* fixed stream=0 in chat engine  (Martin Bielik, 2024-12-05, 3 files)
|
* updated edit configuration docu  (Martin Bielik, 2024-12-05, 1 file)
|
* o1 preview example  (Martin Bielik, 2024-12-05, 1 file)
|
* escaping error message  (Martin Bielik, 2024-12-05, 1 file)
|
* moving from legacy completions api  (Martin Bielik, 2024-12-05, 3 files)
|
* docu new options  (Martin Bielik, 2024-12-03, 4 files)
|
* o1 support - max_completion_tokens  (Martin Bielik, 2024-12-03, 2 files)
|
* Merge branch 'main' into support-non-streaming  (Martin Bielik, 2024-12-03, 1 file)
|\
| * improved error handling, fixes #126  (Martin Bielik, 2024-11-08, 1 file)
| |
* | support non streaming api  (Martin Bielik, 2024-10-08, 3 files)
|/
* Merge pull request #119 from eltociear/patch-1  (Martin Bielik, 2024-09-12, 1 file)
|\  docs: update README.md
| * docs: update README.md  (Ikko Eltociear Ashimine, 2024-09-12, 1 file)
|/  seleciton -> selection
* Merge pull request #113 from madox2/keep-open-mode-multiple-chats  (Martin Bielik, 2024-09-11, 2 files)
|\  allow multiple chats in keep open mode
| * remove useless buffer in keep open mode  (Martin Bielik, 2024-07-29, 1 file)
| |
| * private MakeScratchWindow function  (Martin Bielik, 2024-07-25, 1 file)
| |
| * switch to last created buffer in keep open mode  (Martin Bielik, 2024-07-24, 1 file)
| |
| * allow multiple chats in keep open mode  (Martin Bielik, 2024-07-23, 2 files)
|/
* fixes #110, python compatibility issue with escape sequence  (Martin Bielik, 2024-06-11, 1 file)
|
* added abort flag to the plugin functions  (Martin Bielik, 2024-06-09, 1 file)
|
* handle paste mode in finally block  (Martin Bielik, 2024-06-09, 1 file)
|
* Merge pull request #108 from misterbuckley/fix-print-info-message  (Martin Bielik, 2024-06-06, 1 file)
|\  Fix print_info_message <Esc> issue
| * Fix print_info_message <Esc> issue  (Michael Buckley, 2024-06-04, 1 file)
|/
    I ran into an issue when first using this plugin where the
    print_info_message function wasn't working correctly, due to vim
    misinterpreting the <Esc> sequence in `vim.command("normal \\<Esc>")` as a
    series of individual characters rather than a single literal Escape
    character. This resulted in the characters 'c>' being inserted into the
    active buffer at the cursor location, because the 's' in '<Esc>' was being
    interpreted as a normal mode 's', causing it to enter insert mode, and
    none of the info messages were being echoed properly.

    This was frustrating, as it was not easy to figure out why my commands
    weren't working initially (turns out I hadn't configured my billing plan
    correctly, d'oh).

    Fix this by using a more robust way of sending the <Esc> character to vim
    via `vim.command('call feedkeys("\<Esc>")')`. The usage of double quotes
    inside the feedkeys() call is important because it causes vim to treat the
    sequence as a proper escape sequence rather than a series of individual
    characters (see :h feedkeys).
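The quoting behaviour behind the fix above can be modelled in a few lines of Python. This is a toy model of vim's key-notation expansion (an assumption for illustration, not vim's real parser): inside a double-quoted vim string such as the `feedkeys()` argument, `"\<Esc>"` expands to the single escape character 0x1b, whereas `:normal` takes its argument literally:

```python
# Toy stand-in for vim's key-notation expansion in "..." strings.
ESC = "\x1b"  # the single real Escape character

def vim_double_quote_expand(s: str) -> str:
    """Model: expand the "\\<Esc>" notation to a literal Escape character."""
    return s.replace(r"\<Esc>", ESC)

# Buggy path: :normal gets the literal characters \ < E s c > as keystrokes,
# so the 's' runs as a normal-mode `s` and enters insert mode.
buggy_keys = r"\<Esc>"
# Fixed path: feedkeys("...") expands the notation before feeding keys.
fixed_keys = vim_double_quote_expand(r"\<Esc>")

assert buggy_keys != ESC  # six literal characters, not one Escape
assert fixed_keys == ESC  # one real Escape key reaches vim
```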
* gpt-4o as default chat model  (Martin Bielik, 2024-05-31, 3 files)
|
* define required max_tokens for turbo-instruct model  (Martin Bielik, 2024-05-31, 3 files)
|
* Merge pull request #99 from Konfekt/patch-3  (Martin Bielik, 2024-05-16, 3 files)
|\  increase token limit
| * increase token limit  (Enno, 2024-04-14, 3 files)
|/
* Merge pull request #88 from Konfekt/tabwin  (Martin Bielik, 2024-03-24, 1 file)
|\  detect chat window in other tabs as well
| * refactoring: extracted to helper function, using guards  (Martin Bielik, 2024-03-24, 1 file)
| |
| * retab  (Martin Bielik, 2024-03-24, 1 file)
| |
| * Implement smarter AI chat window detection to reuse existing AI chat windows.  (Konfekt, 2024-03-11, 1 file)
| |
    First search for AI chat windows within the same tab, and then within
    other tabs if none are found in the current tab. It now prioritizes
    reusing an existing chat window that matches the '.aichat' filetype before
    considering opening a new one. If there are no existing AI chat windows,
    the plugin will open a new chat window as a last resort.

    diff --git a/autoload/vim_ai.vim b/autoload/vim_ai.vim:

    ```diff
    -    " reuse chat in active window or tab
    +    " TODO: look for first active chat buffer. If .aichat file is used,
    +    " then reuse chat in active window
    -    " allow .aichat files windows to be switched to, preferably on same tab
    -    let buffer_list_tab = tabpagebuflist(tabpagenr())
    -    let buffer_list_tab = filter(buffer_list_tab, 'getbufvar(v:val, "&filetype") ==# "aichat"')
    -
    -    let buffer_list = []
    -    for i in range(tabpagenr('$'))
    -      call extend(buffer_list, tabpagebuflist(i + 1))
    -    endfor
    -    let buffer_list = filter(buffer_list, 'getbufvar(v:val, "&filetype") ==# "aichat"')
    -
    -    if len(buffer_list_tab) > 0
    -      call win_gotoid(win_findbuf(buffer_list_tab[0])[0])
    -    elseif len(buffer_list) > 0
    -      call win_gotoid(win_findbuf(buffer_list[0])[0])
    -    else
    -      " open new chat window
    -      let l:open_conf = l:config['ui']['open_chat_command']
    -      call s:OpenChatWindow(l:open_conf)
    -    endif
    +    " open new chat window
    +    let l:open_conf = l:config['ui']['open_chat_command']
    +    call s:OpenChatWindow(l:open_conf)
    ```
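The window-selection priority this commit describes can be sketched in Python (the plugin itself implements it in Vimscript; the function and parameter names here are illustrative): prefer an '.aichat' window in the current tab, then one in any other tab, and only open a new window as a last resort.

```python
# Sketch of the chat-window reuse priority, not the plugin's actual code.
def pick_chat_target(current_tab_chat_wins, all_chat_wins):
    """Return ('goto', win_id) to reuse an existing chat window,
    or ('open', None) to open a new one as a last resort."""
    if current_tab_chat_wins:          # 1. chat window in the current tab
        return ('goto', current_tab_chat_wins[0])
    if all_chat_wins:                  # 2. chat window in any other tab
        return ('goto', all_chat_wins[0])
    return ('open', None)              # 3. no chat window anywhere

print(pick_chat_target([], [7]))  # → ('goto', 7)
```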