path: root/py/utils.py
Commit message | Author | Date | Files
* fix selection including extra content when the user is in visual mode | cposture | 2023-12-02 | 1
|
* fixed python3.12 slash escaping, fixes #61 | Martin Bielik | 2023-11-01 | 1
|
* Merge remote-tracking branch 'origin/main' into base-url-config | Martin Bielik | 2023-10-21 | 1
|\
| * Include OpenAI Org ID from the token config | Duy Lam | 2023-09-09 | 1
| |
* | option to disable authorization | Martin Bielik | 2023-10-21 | 1
| |
* | Add support for base_url option to use local models | juodumas | 2023-09-18 | 1
|/
|
| For example, you can start llama-cpp-python like this (it emulates the openai api):
|
| ```sh
| CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install 'llama-cpp-python[server]'
| wget https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/resolve/main/codellama-13b-instruct.Q5_K_M.gguf
| python3 -m llama_cpp.server --n_gpu_layers 100 --model codellama-13b-instruct.Q5_K_M.gguf
| ```
|
| Then set the API url in your `.vimrc`:
|
| ```vim
| let g:vim_ai_chat = {
| \  "engine": "chat",
| \  "options": {
| \    "base_url": "http://127.0.0.1:8000",
| \  },
| \}
| ```
|
| And chat with the locally hosted AI using `:AIChat`.
|
| The change in utils.py was needed because llama-cpp-python adds a new line to the final response: `[DONE]^M`.
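The `[DONE]^M` detail in the commit body above can be illustrated with a minimal sketch. This is not the actual utils.py code; the `is_done` helper and the sample SSE lines are hypothetical, showing only why the trailing carriage return must be stripped before comparing against the OpenAI-style `[DONE]` stream terminator:

```python
# Hypothetical sketch (not the real vim-ai implementation): llama-cpp-python
# may end its OpenAI-compatible SSE stream with "data: [DONE]\r\n", so a
# naive equality check against "[DONE]" fails unless the "\r" is stripped.
def is_done(sse_line: bytes) -> bool:
    # Drop the "data: " prefix, then strip trailing whitespace including "\r".
    payload = sse_line.removeprefix(b"data: ").strip()
    return payload == b"[DONE]"

print(is_done(b"data: [DONE]\r\n"))         # True: the CR is stripped away
print(is_done(b'data: {"choices": []}\n'))  # False: an ordinary data chunk
```

Stripping whitespace rather than matching `[DONE]\r` exactly keeps the check working for servers that terminate with either `\n` or `\r\n`.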
* allow string in initial_prompt, closes #35 | Martin Bielik | 2023-06-25 | 1
|
* optional max_tokens, fixes #42 | Martin Bielik | 2023-06-11 | 1
|
* importing vim module, fixes #43 | Martin Bielik | 2023-05-23 | 1
|
* print error in debug | Martin Bielik | 2023-05-19 | 1
|
* clear echo message after completion | Martin Bielik | 2023-05-14 | 1
|
* Allow single undo | BonaBeavis | 2023-05-04 | 1
|
| Fixes https://github.com/madox2/vim-ai/issues/14
* http error handling | Martin Bielik | 2023-04-26 | 1
|
* recover for unfinished chat | Martin Bielik | 2023-04-22 | 1
|
* empty message warning, reference #20 | Martin Bielik | 2023-04-18 | 1
|
* nvim keyboard interrupt handling | Martin Bielik | 2023-04-16 | 1
|
* fixed error handling | Martin Bielik | 2023-04-15 | 1
|
* using messages to show error/warning | Martin Bielik | 2023-04-15 | 1
|
* reusing error handler | Martin Bielik | 2023-04-15 | 1
|
* reorganized request options | Martin Bielik | 2023-04-15 | 1
|
* removing openai-python from docs | Martin Bielik | 2023-04-13 | 1
|
* implemented request_timeout | Martin Bielik | 2023-04-13 | 1
|
* poc: removing openai dependency | Martin Bielik | 2023-04-13 | 1
|
* moving import openai check to python scripts | Martin Bielik | 2023-04-12 | 1
|
* fixed debug variable type | Martin Bielik | 2023-04-11 | 1
|
* fixed legacy method | Martin Bielik | 2023-04-11 | 1
|
* added debug logging | Martin Bielik | 2023-04-11 | 1
|
* improved error handling | Martin Bielik | 2023-04-10 | 1
|
* populate options in chat | Martin Bielik | 2023-04-10 | 1
|
* parse chat header options | Martin Bielik | 2023-04-09 | 1
|
* chat engine | Martin Bielik | 2023-04-04 | 1
|
* Merge branch 'main' into next | Martin Bielik | 2023-04-04 | 1
|\
| * extending config programmatically | Martin Bielik | 2023-04-02 | 1
| |
* | chat initial prompt poc | Martin Bielik | 2023-03-27 | 1
|/
* completion configuration | Martin Bielik | 2023-03-22 | 1
|
* openai configuration | Martin Bielik | 2023-03-21 | 1
|
* request timeout | Martin Bielik | 2023-03-20 | 1
|
* chat streaming, more py3 integration | Martin Bielik | 2023-03-13 | 1