path: root/py/complete.py

Commit log (message, author, date, files changed):
* import vim before utils, fixes #43 (Martin Bielik, 2023-12-23, 1 file)
* fix selection including extra content when the user is in visual mode (cposture, 2023-12-02, 1 file)
* removed unused import (Martin Bielik, 2023-10-21, 1 file)
* endpoint_url config (Martin Bielik, 2023-10-21, 1 file)
* base_url extracted to config, docs (Martin Bielik, 2023-10-21, 1 file)
* Add support for base_url option to use local models (juodumas, 2023-09-18, 1 file)

  For example, you can start llama-cpp-python like this (it emulates the OpenAI API):

  ```sh
  CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install 'llama-cpp-python[server]'
  wget https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/resolve/main/codellama-13b-instruct.Q5_K_M.gguf
  python3 -m llama_cpp.server --n_gpu_layers 100 --model codellama-13b-instruct.Q5_K_M.gguf
  ```

  Then set the API URL in your `.vimrc`:

  ```vim
  let g:vim_ai_chat = {
  \  "engine": "chat",
  \  "options": {
  \    "base_url": "http://127.0.0.1:8000",
  \  },
  \}
  ```

  And chat with the locally hosted AI using `:AIChat`. The change in utils.py was needed
  because llama-cpp-python adds a new line to the final response: `[DONE]^M`.
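  The `^M` above is a trailing carriage return on the final server-sent-events line,
  which breaks a naive comparison against `[DONE]`. Below is a minimal Python sketch of
  stripping it while reading an OpenAI-style event stream; the helper name
  `iter_stream_chunks` is hypothetical and not the plugin's actual utils.py code:

  ```python
  import json

  def iter_stream_chunks(lines):
      """Yield decoded JSON chunks from an OpenAI-style SSE byte stream."""
      for raw in lines:
          # .strip() also drops the trailing "\r" (^M) that llama-cpp-python
          # appends to its final "data: [DONE]" line
          line = raw.decode("utf-8").strip()
          if not line.startswith("data:"):
              continue  # skip blank keep-alive lines between events
          payload = line[len("data:"):].strip()
          if payload == "[DONE]":
              return  # end-of-stream sentinel
          yield json.loads(payload)

  # usage, e.g. with an HTTP response iterated line by line:
  #   for chunk in iter_stream_chunks(response): ...
  ```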
* allow string in initial_prompt, closes #35 (Martin Bielik, 2023-06-25, 1 file)
* print error in debug (Martin Bielik, 2023-05-19, 1 file)
* clear echo message after completion (Martin Bielik, 2023-05-14, 1 file)
* pass config as a parameter (Martin Bielik, 2023-04-22, 1 file)
* reusing error handler (Martin Bielik, 2023-04-15, 1 file)
* reorganized request options (Martin Bielik, 2023-04-15, 1 file)
* using scoped variables (Martin Bielik, 2023-04-15, 1 file)
* implemented request_timeout (Martin Bielik, 2023-04-13, 1 file)
* poc: removing openai dependency (Martin Bielik, 2023-04-13, 1 file)
* moving import openai check to python scripts (Martin Bielik, 2023-04-12, 1 file)
* added debug logging (Martin Bielik, 2023-04-11, 1 file)
* populate options in chat (Martin Bielik, 2023-04-10, 1 file)
* parse chat header options (Martin Bielik, 2023-04-09, 1 file)
* passing prompt as param (Martin Bielik, 2023-04-05, 1 file)
* combine initial prompt with empty chat prompt (Martin Bielik, 2023-04-04, 1 file)
* chat engine (Martin Bielik, 2023-04-04, 1 file)
* Merge branch 'main' into next (Martin Bielik, 2023-04-04, 1 file)
|\
| * trim newlines from the prompt, fixes #5 (Martin Bielik, 2023-04-03, 1 file)
* | chat initial prompt poc (Martin Bielik, 2023-03-27, 1 file)
|/
* improved request timeout message (Martin Bielik, 2023-03-26, 1 file)
* handle connection timeout errors (Martin Bielik, 2023-03-25, 1 file)
* completion configuration (Martin Bielik, 2023-03-22, 1 file)
* openai configuration (Martin Bielik, 2023-03-21, 1 file)
* request timeout (Martin Bielik, 2023-03-20, 1 file)
* ctrl c to cancel completion (Martin Bielik, 2023-03-14, 1 file)
* stream complete/edit commands (Martin Bielik, 2023-03-13, 1 file)
* chat streaming, more py3 integration (Martin Bielik, 2023-03-13, 1 file)
* getting rid of global dependencies (Martin Bielik, 2023-03-12, 1 file)
* using openai api directly (Martin Bielik, 2023-03-03, 1 file)