path: root/py
Commit message (author, date, files changed)
* fix selection include extra content when the user is in visual mode (cposture, 2023-12-02, 3 files)
|
* fixed python3.12 slash escaping, fixes #61 (Martin Bielik, 2023-11-01, 1 file)
|
* removed unused import (Martin Bielik, 2023-10-21, 2 files)
|
* Merge remote-tracking branch 'origin/main' into base-url-config (Martin Bielik, 2023-10-21, 1 file)
|\
| * Include OpenAI Org ID from the token config (Duy Lam, 2023-09-09, 1 file)
| |
* | endpoint_url config (Martin Bielik, 2023-10-21, 2 files)
| |
* | option to disable authorization (Martin Bielik, 2023-10-21, 1 file)
| |
* | base_url extracted to config, docu (Martin Bielik, 2023-10-21, 2 files)
| |
* | Add support for base_url option to use local models (juodumas, 2023-09-18, 3 files)
|/
|
|   For example, you can start llama-cpp-python like this (it emulates the openai api):
|
|   ```sh
|   CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install 'llama-cpp-python[server]'
|   wget https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/resolve/main/codellama-13b-instruct.Q5_K_M.gguf
|   python3 -m llama_cpp.server --n_gpu_layers 100 --model codellama-13b-instruct.Q5_K_M.gguf
|   ```
|
|   Then set the API url in your `.vimrc`:
|
|   ```vim
|   let g:vim_ai_chat = {
|   \  "engine": "chat",
|   \  "options": {
|   \    "base_url": "http://127.0.0.1:8000",
|   \  },
|   \  }
|   ```
|
|   And chat with the locally hosted AI using `:AIChat`.
|
|   The change in utils.py was needed because llama-cpp-python adds a new line
|   to the final response: `[DONE]^M`.
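As a side note on that last point, here is a minimal sketch of how such a terminator might be filtered when reading an OpenAI-style streamed response. It is not the plugin's actual utils.py code; the function name and structure are illustrative assumptions, and only the `data:` / `[DONE]` framing comes from the standard streaming format.

```python
import json

def iter_stream_chunks(raw_lines):
    """Yield decoded JSON chunks from an OpenAI-style streaming response.

    Illustrative only: .strip() drops the trailing carriage return, so the
    final "[DONE]^M" line sent by llama-cpp-python is treated as the normal
    end-of-stream sentinel instead of causing a JSON decode error.
    """
    for raw in raw_lines:
        line = raw.decode("utf-8").strip()   # also removes the trailing ^M
        if not line.startswith("data:"):
            continue                         # skip keep-alive / empty lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break                            # end-of-stream marker, no JSON to parse
        yield json.loads(payload)
```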
* allow string in initial_prompt, closes #35 (Martin Bielik, 2023-06-25, 3 files)
|
* optional max_tokens, fixes #42 (Martin Bielik, 2023-06-11, 1 file)
|
* importing vim module, fixes #43 (Martin Bielik, 2023-05-23, 1 file)
|
* print error in debug (Martin Bielik, 2023-05-19, 3 files)
|
* clear echo message after completion (Martin Bielik, 2023-05-14, 3 files)
|
* Allow single undo (BonaBeavis, 2023-05-04, 1 file)
|
|   Fixes https://github.com/madox2/vim-ai/issues/14
* http error handling (Martin Bielik, 2023-04-26, 1 file)
|
* pass config as a parameter (Martin Bielik, 2023-04-22, 2 files)
|
* recover for unfinished chat (Martin Bielik, 2023-04-22, 2 files)
|
* move prompt to python (Martin Bielik, 2023-04-21, 1 file)
|
* empty message warning, reference #20 (Martin Bielik, 2023-04-18, 1 file)
|
* nvim keyboard interrupt handling (Martin Bielik, 2023-04-16, 1 file)
|
* improved undo sequence break (Martin Bielik, 2023-04-15, 1 file)
|
* fixed error handling (Martin Bielik, 2023-04-15, 1 file)
|
* using messages to show error/warning (Martin Bielik, 2023-04-15, 1 file)
|
* reusing error handler (Martin Bielik, 2023-04-15, 3 files)
|
* reorganized request options (Martin Bielik, 2023-04-15, 3 files)
|
* using scoped variables (Martin Bielik, 2023-04-15, 2 files)
|
* removing openai-python from docu (Martin Bielik, 2023-04-13, 1 file)
|
* implemented request_timeout (Martin Bielik, 2023-04-13, 3 files)
|
* poc: removing openai dependency (Martin Bielik, 2023-04-13, 3 files)
|
* moving import openai check to python scripts (Martin Bielik, 2023-04-12, 3 files)
|
* fixed debug variable type (Martin Bielik, 2023-04-11, 1 file)
|
* fixed legacy method (Martin Bielik, 2023-04-11, 1 file)
|
* added debug logging (Martin Bielik, 2023-04-11, 3 files)
|
* improved error handling (Martin Bielik, 2023-04-10, 2 files)
|
* populate options in chat (Martin Bielik, 2023-04-10, 3 files)
|
* parse chat header options (Martin Bielik, 2023-04-09, 3 files)
|
* passing prompt as param (Martin Bielik, 2023-04-05, 1 file)
|
* combine initial prompt with empty chat prompt (Martin Bielik, 2023-04-04, 2 files)
|
* chat engine (Martin Bielik, 2023-04-04, 3 files)
|
* Merge branch 'main' into next (Martin Bielik, 2023-04-04, 3 files)
|\
| * break undo sequence after initial prompt (Martin Bielik, 2023-04-03, 1 file)
| |
| * handle roles in python (Martin Bielik, 2023-04-03, 1 file)
| |
| * trim newlines from the prompt, fixes #5 (Martin Bielik, 2023-04-03, 2 files)
| |
| * extending config programatically (Martin Bielik, 2023-04-02, 1 file)
| |
* | chat initial prompt poc (Martin Bielik, 2023-03-27, 3 files)
|/
* improved request timeout message (Martin Bielik, 2023-03-26, 2 files)
|
* handle connection timeout errors (Martin Bielik, 2023-03-25, 2 files)
|
* completion configuration (Martin Bielik, 2023-03-22, 3 files)
|
* openai configuration (Martin Bielik, 2023-03-21, 3 files)
|