| Commit message | Author | Date | Files |
|---|---|---|---|
| removed unused import | Martin Bielik | 2023-10-21 | 1 |
| endpoint_url config | Martin Bielik | 2023-10-21 | 1 |
| base_url extracted to config, docu | Martin Bielik | 2023-10-21 | 1 |
| Add support for base_url option to use local models | juodumas | 2023-09-18 | 1 |

For example, you can start llama-cpp-python like this (it emulates the openai api):

```sh
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install 'llama-cpp-python[server]'
wget https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/resolve/main/codellama-13b-instruct.Q5_K_M.gguf
python3 -m llama_cpp.server --n_gpu_layers 100 --model codellama-13b-instruct.Q5_K_M.gguf
```

Then set the API url in your `.vimrc`:

```vim
let g:vim_ai_chat = {
\  "engine": "chat",
\  "options": {
\    "base_url": "http://127.0.0.1:8000",
\  },
\}
```

And chat with the locally hosted AI using `:AIChat`.

The change in utils.py was needed because llama-cpp-python adds a new line to the final response: `[DONE]^M`.
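As context for that last note: an OpenAI-compatible server signals the end of a streamed response with a `data: [DONE]` sentinel, and llama-cpp-python terminates that line with CRLF, so a strict comparison against `[DONE]` fails unless the trailing carriage return is stripped first. Below is a minimal illustrative sketch of that parsing step; the function name and stream handling are assumptions for the example, not the plugin's actual utils.py code.

```python
# Illustrative sketch, not vim-ai's actual utils.py: decoding an
# OpenAI-compatible SSE stream whose final sentinel may arrive as
# "[DONE]\r" (CRLF-terminated) rather than a bare "[DONE]".
import json

def iter_chat_chunks(response_lines):
    """Yield parsed JSON chunks from an iterable of raw SSE byte lines."""
    for raw_line in response_lines:
        line = raw_line.decode("utf-8").strip()  # strip() drops a trailing \r too
        if not line.startswith("data: "):
            continue  # skip blank separator lines and keep-alives
        payload = line[len("data: "):]
        if payload == "[DONE]":  # now matches even when the server sent [DONE]\r
            return
        yield json.loads(payload)
```

Stripping the line before the comparison keeps the parser agnostic to LF- vs CRLF-terminated streams, which is the spirit of the fix the commit describes.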

| Commit message | Author | Date | Files |
|---|---|---|---|
| allow string in initial_prompt, closes #35 | Martin Bielik | 2023-06-25 | 1 |
| print error in debug | Martin Bielik | 2023-05-19 | 1 |
| clear echo message after completion | Martin Bielik | 2023-05-14 | 1 |
| pass config as a parameter | Martin Bielik | 2023-04-22 | 1 |
| reusing error handler | Martin Bielik | 2023-04-15 | 1 |
| reorganized request options | Martin Bielik | 2023-04-15 | 1 |
| using scoped variables | Martin Bielik | 2023-04-15 | 1 |
| implemented request_timeout | Martin Bielik | 2023-04-13 | 1 |
| poc: removing openai dependency | Martin Bielik | 2023-04-13 | 1 |
| moving import openai check to python scripts | Martin Bielik | 2023-04-12 | 1 |
| added debug logging | Martin Bielik | 2023-04-11 | 1 |
| populate options in chat | Martin Bielik | 2023-04-10 | 1 |
| parse chat header options | Martin Bielik | 2023-04-09 | 1 |
| passing prompt as param | Martin Bielik | 2023-04-05 | 1 |
| combine initial prompt with empty chat prompt | Martin Bielik | 2023-04-04 | 1 |
| chat engine | Martin Bielik | 2023-04-04 | 1 |
| Merge branch 'main' into next | Martin Bielik | 2023-04-04 | 1 |
| trim newlines from the prompt, fixes #5 | Martin Bielik | 2023-04-03 | 1 |
| chat initial prompt poc | Martin Bielik | 2023-03-27 | 1 |
| improved request timeout message | Martin Bielik | 2023-03-26 | 1 |
| handle connection timeout errors | Martin Bielik | 2023-03-25 | 1 |
| completion configuration | Martin Bielik | 2023-03-22 | 1 |
| openai configuration | Martin Bielik | 2023-03-21 | 1 |
| request timeout | Martin Bielik | 2023-03-20 | 1 |
| ctrl c to cancel completion | Martin Bielik | 2023-03-14 | 1 |
| stream complete/edit commands | Martin Bielik | 2023-03-13 | 1 |
| chat streaming, more py3 integration | Martin Bielik | 2023-03-13 | 1 |
| getting rid of global dependencies | Martin Bielik | 2023-03-12 | 1 |
| using openai api directly | Martin Bielik | 2023-03-03 | 1 |