I ran into an issue when first using this plugin where the
print_info_message function wasn't working correctly: Vim misinterpreted
the <Esc> sequence in `vim.command("normal \\<Esc>")` as a series of
individual characters rather than a single literal Escape character. The
's' in '<Esc>' was read as a normal-mode 's' command, which entered
insert mode and inserted the characters 'c>' into the active buffer at
the cursor location, and none of the info messages were echoed properly.
This was frustrating, as it was not easy to figure out why my commands
weren't working initially (turns out I hadn't configured my billing plan
correctly, d'oh).
Fix this by sending the <Esc> character to Vim in a more robust way via
`vim.command('call feedkeys("\<Esc>")')`.
The double quotes inside the feedkeys() call are important: they make
Vim treat the sequence as a proper escape sequence rather than a series
of individual characters (see :h feedkeys).
The application was restricted to loading role configurations only from
a predefined config file, which limited extensibility.
Enable dynamic role configuration by invoking a custom Vim function if
it is defined. This allows users to extend the role configurations
beyond the static file.
diff --git a/doc/vim-ai.txt b/doc/vim-ai.txt:
+The roles in g:vim_ai_roles_config_file are converted to a Vim dictionary.
+Optionally, additional roles can be added by defining a function VimAIRoleParser()
+whose output is a dictionary of the same format as g:vim_ai_roles_config_file.

diff --git a/py/roles.py b/py/roles.py:
+if vim.eval('exists("*VimAIRoleParser")') == '1':
+    roles.update(vim.eval('VimAIRoleParser()'))

diff --git a/py/utils.py b/py/utils.py:
+ if vim.eval('exists("*VimAIRoleParser")') == '1':
+     roles.update(vim.eval('VimAIRoleParser()'))
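A minimal sketch of the merge logic described above (names are hypothetical; `vim_eval` stands in for `vim.eval`, which returns the string "1", not a Python bool, when the function exists — hence the explicit `== '1'` check):

```python
def merge_dynamic_roles(static_roles, vim_eval):
    """Merge roles from the static config file with roles returned by a
    user-defined VimAIRoleParser() function, if one is defined.

    Sketch only: `vim_eval` stands in for `vim.eval`, which returns
    Vim values as strings, so exists() yields "0" or "1".
    """
    roles = dict(static_roles)
    if vim_eval('exists("*VimAIRoleParser")') == '1':
        # VimAIRoleParser() must return a dictionary in the same
        # format as g:vim_ai_roles_config_file
        roles.update(vim_eval('VimAIRoleParser()'))
    return roles
```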
Files may be included in the chat by a special `include` role. Each
file's contents are added to an additional `user` role message, with
the files separated by `==> {path} <==` headers, where `{path}` is the
path to the file. Globs are expanded via `glob.glob`, and paths relative
to the current working directory (as determined by `getcwd()`) are
resolved to absolute paths.
Example:
```
>>> user
Generate documentation for the following files
>>> include
/home/user/myproject/src/../requirements.txt
/home/user/myproject/**/*.py
```
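The expansion described above can be sketched roughly like this (function names are hypothetical, not the plugin's actual API):

```python
import glob
import os


def expand_includes(lines):
    """Expand each include line: resolve relative paths against the
    current working directory and expand glob patterns (sketch only).
    os.path.abspath also normalizes segments like `src/..`."""
    paths = []
    for line in lines:
        pattern = os.path.abspath(line.strip())
        # recursive=True makes `**` match nested directories
        paths.extend(sorted(glob.glob(pattern, recursive=True)))
    return paths


def build_include_message(paths):
    """Concatenate file contents under `==> {path} <==` headers,
    forming the extra `user` role message."""
    sections = []
    for path in paths:
        with open(path) as f:
            sections.append(f"==> {path} <==\n{f.read()}")
    return "\n\n".join(sections)
```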
Fixes: #69
For example, you can start llama-cpp-python like this (it emulates
the OpenAI API):
```sh
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install 'llama-cpp-python[server]'
wget https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/resolve/main/codellama-13b-instruct.Q5_K_M.gguf
python3 -m llama_cpp.server --n_gpu_layers 100 --model codellama-13b-instruct.Q5_K_M.gguf
```
Then set the API url in your `.vimrc`:
```vim
let g:vim_ai_chat = {
\ "engine": "chat",
\ "options": {
\ "base_url": "http://127.0.0.1:8000",
\ },
\ }
```
And chat with the locally hosted AI using `:AIChat`.
The change in utils.py was needed because llama-cpp-python appends a
trailing carriage return to the final response line: `[DONE]^M`.
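One way to handle that robustly is to strip trailing line endings (including the carriage return) before comparing against the `[DONE]` marker — a sketch with a hypothetical helper name, not the actual utils.py change:

```python
def extract_stream_payload(raw_line):
    """Return the payload of a streaming response line, or None for
    blank keep-alive lines and the final [DONE] marker.

    rstrip("\r\n") removes the trailing carriage return that
    llama-cpp-python appends (`[DONE]^M`), so the marker still
    matches. Sketch only; the plugin's real parsing may differ."""
    line = raw_line.rstrip("\r\n")
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    return None if payload == "[DONE]" else payload
```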
Fixes https://github.com/madox2/vim-ai/issues/14