| | Commit message | Author | Date | Files |
|---|---|---|---|---|
| * | Merge remote-tracking branch 'origin/main' into base-url-config | Martin Bielik | 2023-10-21 | 1 |
| |\ | ||||
| | * | Include OpenAI Org ID from the token config | Duy Lam | 2023-09-09 | 1 |
| | | | ||||
| * | | option to disable authorization | Martin Bielik | 2023-10-21 | 1 |
| | | | ||||
| * | | Add support for base_url option to use local models | juodumas | 2023-09-18 | 1 |
| |/ | | | | | | | | | | | | | | | | | | | | | | | | | | For example, you can start llama-cpp-python like this (it emulates the openai api): ```sh CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install 'llama-cpp-python[server]' wget https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/resolve/main/codellama-13b-instruct.Q5_K_M.gguf python3 -m llama_cpp.server --n_gpu_layers 100 --model codellama-13b-instruct.Q5_K_M.gguf ``` Then set the API url in your `.vimrc`: ```vim let g:vim_ai_chat = { \ "engine": "chat", \ "options": { \ "base_url": "http://127.0.0.1:8000", \ }, \ } ``` And chat with the locally hosted AI using `:AIChat`. The change in utils.py was needed because llama-cpp-python adds a new line to the final response: `[DONE]^M`. | |||
| * | allow string in initial_prompt, closes #35 | Martin Bielik | 2023-06-25 | 1 |
| | | ||||
| * | optional max_tokens, fixes #42 | Martin Bielik | 2023-06-11 | 1 |
| | | ||||
| * | importing vim module, fixes #43 | Martin Bielik | 2023-05-23 | 1 |
| | | ||||
| * | print error in debug | Martin Bielik | 2023-05-19 | 1 |
| | | ||||
| * | clear echo message after completion | Martin Bielik | 2023-05-14 | 1 |
| | | ||||
| * | Allow single undo | BonaBeavis | 2023-05-04 | 1 |
| | | | | Fixes https://github.com/madox2/vim-ai/issues/14 | |||
| * | http error handling | Martin Bielik | 2023-04-26 | 1 |
| | | ||||
| * | recover for unfinished chat | Martin Bielik | 2023-04-22 | 1 |
| | | ||||
| * | empty message warning, reference #20 | Martin Bielik | 2023-04-18 | 1 |
| | | ||||
| * | nvim keyboard interrupt handling | Martin Bielik | 2023-04-16 | 1 |
| | | ||||
| * | fixed error handling | Martin Bielik | 2023-04-15 | 1 |
| | | ||||
| * | using messages to show error/warning | Martin Bielik | 2023-04-15 | 1 |
| | | ||||
| * | reusing error handler | Martin Bielik | 2023-04-15 | 1 |
| | | ||||
| * | reorganized request options | Martin Bielik | 2023-04-15 | 1 |
| | | ||||
| * | removing openai-python from docu | Martin Bielik | 2023-04-13 | 1 |
| | | ||||
| * | implemented request_timeout | Martin Bielik | 2023-04-13 | 1 |
| | | ||||
| * | poc: removing openai dependency | Martin Bielik | 2023-04-13 | 1 |
| | | ||||
| * | moving import openai check to python scripts | Martin Bielik | 2023-04-12 | 1 |
| | | ||||
| * | fixed debug variable type | Martin Bielik | 2023-04-11 | 1 |
| | | ||||
| * | fixed legacy method | Martin Bielik | 2023-04-11 | 1 |
| | | ||||
| * | added debug logging | Martin Bielik | 2023-04-11 | 1 |
| | | ||||
| * | improved error handling | Martin Bielik | 2023-04-10 | 1 |
| | | ||||
| * | populate options in chat | Martin Bielik | 2023-04-10 | 1 |
| | | ||||
| * | parse chat header options | Martin Bielik | 2023-04-09 | 1 |
| | | ||||
| * | chat engine | Martin Bielik | 2023-04-04 | 1 |
| | | ||||
| * | Merge branch 'main' into next | Martin Bielik | 2023-04-04 | 1 |
| |\ | ||||
| | * | extending config programatically | Martin Bielik | 2023-04-02 | 1 |
| | | | ||||
| * | | chat initial prompt poc | Martin Bielik | 2023-03-27 | 1 |
| |/ | ||||
| * | completion configuration | Martin Bielik | 2023-03-22 | 1 |
| | | ||||
| * | openai configuration | Martin Bielik | 2023-03-21 | 1 |
| | | ||||
| * | request timeout | Martin Bielik | 2023-03-20 | 1 |
| | | ||||
| * | chat streaming, more py3 integration | Martin Bielik | 2023-03-13 | 1 |