path: root/py/complete.py
Date | Commit message | Author | Files
2024-12-12 | fixed complete command roles after refactoring | Martin Bielik | 1
2024-12-08 | print prompt in debug mode | Martin Bielik | 1
2024-12-07 | fixed options normalization | Martin Bielik | 1
2024-12-05 | fixed stream=0 in chat engine | Martin Bielik | 1
2024-10-08 | support non streaming api | Martin Bielik | 1
2024-03-09 | parse role options | Martin Bielik | 1
2024-03-09 | read role prompt from config | Martin Bielik | 1
2023-12-23 | import vim before utils, fixes #43 | Martin Bielik | 1
2023-12-02 | fix selection include extra content when the user is in visual mode | cposture | 1
2023-10-21 | removed unused import | Martin Bielik | 1
2023-10-21 | endpoint_url config | Martin Bielik | 1
2023-10-21 | base_url extracted to config, docu | Martin Bielik | 1
2023-09-18 | Add support for base_url option to use local models | juodumas | 1
For example, you can start llama-cpp-python like this (it emulates the OpenAI API):

```sh
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install 'llama-cpp-python[server]'
wget https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/resolve/main/codellama-13b-instruct.Q5_K_M.gguf
python3 -m llama_cpp.server --n_gpu_layers 100 --model codellama-13b-instruct.Q5_K_M.gguf
```

Then set the API URL in your `.vimrc`:

```vim
let g:vim_ai_chat = {
\  "engine": "chat",
\  "options": {
\    "base_url": "http://127.0.0.1:8000",
\  },
\}
```

And chat with the locally hosted AI using `:AIChat`.

The change in utils.py was needed because llama-cpp-python adds a new line to the final response: `[DONE]^M` (see the sketch after this log).
2023-06-25 | allow string in initial_prompt, closes #35 | Martin Bielik | 1
2023-05-19 | print error in debug | Martin Bielik | 1
2023-05-14 | clear echo message after completion | Martin Bielik | 1
2023-04-22 | pass config as a parameter | Martin Bielik | 1
2023-04-15 | reusing error handler | Martin Bielik | 1
2023-04-15 | reorganized request options | Martin Bielik | 1
2023-04-15 | using scoped variables | Martin Bielik | 1
2023-04-13 | implemented request_timeout | Martin Bielik | 1
2023-04-13 | poc: removing openai dependency | Martin Bielik | 1
2023-04-12 | moving import openai check to python scripts | Martin Bielik | 1
2023-04-11 | added debug logging | Martin Bielik | 1
2023-04-10 | populate options in chat | Martin Bielik | 1
2023-04-09 | parse chat header options | Martin Bielik | 1
2023-04-05 | passing prompt as param | Martin Bielik | 1
2023-04-04 | combine initial prompt with empty chat prompt | Martin Bielik | 1
2023-04-04 | chat engine | Martin Bielik | 1
2023-04-03 | trim newlines from the prompt, fixes #5 | Martin Bielik | 1
2023-03-27 | chat initial prompt poc | Martin Bielik | 1
2023-03-26 | improved request timeout message | Martin Bielik | 1
2023-03-25 | handle connection timeout errors | Martin Bielik | 1
2023-03-22 | completion configuration | Martin Bielik | 1
2023-03-21 | openai configuration | Martin Bielik | 1
2023-03-20 | request timeout | Martin Bielik | 1
2023-03-14 | ctrl c to cancel completion | Martin Bielik | 1
2023-03-13 | stream complete/edit commands | Martin Bielik | 1
2023-03-13 | chat streaming, more py3 integration | Martin Bielik | 1
2023-03-12 | getting rid of global dependencies | Martin Bielik | 1
2023-03-03 | using openai api directly | Martin Bielik | 1
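
The `[DONE]^M` quirk mentioned in the base_url commit above is worth illustrating. Below is a minimal sketch of how a stream reader can tolerate the trailing carriage return that llama-cpp-python appends to the final server-sent-events line; `parse_sse_chunk` is a hypothetical helper written for this example, not the plugin's actual utils.py code.

```python
import json

def parse_sse_chunk(raw_line: bytes):
    """Parse one server-sent-events line from an OpenAI-compatible stream.

    Hypothetical helper, not the plugin's actual utils.py code. Returns the
    decoded JSON payload, or None for blank lines and the [DONE] sentinel.
    """
    # strip() removes the trailing "\r" that llama-cpp-python adds, so the
    # final line, which arrives as "data: [DONE]\r", still matches below.
    line = raw_line.decode("utf-8").strip()
    if not line.startswith("data: "):
        return None  # blank keep-alive line or SSE comment
    payload = line[len("data: "):]
    if payload == "[DONE]":  # end-of-stream sentinel
        return None
    return json.loads(payload)
```

A reader that only stripped `\n` would see the sentinel as `[DONE]\r` and never match it, leaving the stream apparently unfinished, which is presumably the failure mode the utils.py change fixed.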