aimlapi/gptcli (forked from evilpan/gptcli)

ChatGPT in the command line with the OpenAI API (gpt-3.5-turbo/gpt-4/gpt-4-32k)


Take ChatGPT into the command line.

(demo GIF: streaming output in the terminal)

Setup

  1. Clone this repo.
  2. pip3 install -U -r requirements.txt
  3. Copy demo_config.json to config.json.
  4. Get your OPENAI_API_KEY and put it in config.json.

Run

$ ./gptcli.py -h
usage: gptcli.py [-h] [-c CONFIG]

options:
  -h, --help  show this help message and exit
  -c CONFIG   path to your config.json (default: config.json)

Sample config.json:

{
    "api_key": "<API_TOKEN>",
    "api_base": "https://api.aimlapi.com",
    "api_type": "open_ai",
    "api_version": null,
    "model": "gpt-3.5-turbo",
    "context": 2,
    "stream": true,
    "stream_render": true,
    "showtokens": false,
    "proxy": null,
    "prompt": [
        { "role": "system", "content": "If your response contains code, show with syntax highlight, for example ```js\ncode\n```" }
    ]
}
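Several of these fields fall back to environment variables when absent, as described below. A minimal loader sketch (the `load_config` helper and its defaults are illustrative, not gptcli's actual code):

```python
import json
import os

# fields that may be omitted from config.json and read from the environment
ENV_FALLBACKS = {
    "api_key": "OPENAI_API_KEY",
    "api_base": "OPENAI_API_BASE",
    "api_type": "OPENAI_API_TYPE",
    "api_version": "OPENAI_API_VERSION",
    "api_organization": "OPENAI_ORGANIZATION",
}

def load_config(path="config.json"):
    with open(path) as f:
        cfg = json.load(f)
    # null/missing values fall back to environment variables
    for key, env in ENV_FALLBACKS.items():
        if not cfg.get(key):
            cfg[key] = os.environ.get(env)
    # defaults for optional fields
    cfg.setdefault("model", "gpt-3.5-turbo")
    cfg.setdefault("context", 2)
    return cfg
```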
  • (required) api_key: your OpenAI API key; read from the OPENAI_API_KEY environment variable if not set
  • (optional) api_base: the OpenAI API base URL. Can be set to a reverse proxy server, for example Azure OpenAI Service or chatgptProxyAPI. Defaults to OPENAI_API_BASE or https://api.openai.com/v1;
  • (optional) api_type: the OpenAI API type, read from the OPENAI_API_TYPE environment variable by default;
  • (optional) api_version: the OpenAI API version, read from the OPENAI_API_VERSION environment variable by default;
  • (optional) api_organization: your OpenAI organization info, read from the OPENAI_ORGANIZATION environment variable by default;
  • (optional) model: the chat model, gpt-3.5-turbo by default; choices are:
    • gpt-3.5-turbo
    • gpt-4
    • gpt-4-32k
  • (optional) context: Chat session context, choices are:
    • 0: no context is sent with each request; costs the fewest tokens, but the AI doesn't know what you said before;
    • 1: only previous user questions are sent as context;
    • 2: both previous questions and answers are sent as context; costs more tokens;
  • (optional) stream: Output in stream mode;
  • (optional) stream_render: Render Markdown in stream mode; you can disable it to avoid some UI bugs;
  • (optional) showtokens: Print token usage after every chat;
  • (optional) proxy: Use an http/https/socks4a/socks5 proxy for requests to api_base;
  • (optional) prompt: Customize your prompt; it is included in every chat request;
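The three context modes above amount to a message-assembly step before each request. A sketch of that step (the `build_messages` helper is hypothetical, not gptcli's actual code):

```python
def build_messages(history, question, context_mode, prompt=()):
    """Assemble the messages list for one chat request.

    history: list of (question, answer) pairs from this session.
    context_mode: 0 = no history, 1 = previous questions only,
                  2 = previous questions and answers.
    prompt: the configured system prompt messages, if any.
    """
    messages = list(prompt)
    if context_mode >= 1:
        for q, a in history:
            messages.append({"role": "user", "content": q})
            if context_mode == 2:
                messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages
```

Mode 2 sends the most tokens per request because every prior answer rides along; mode 0 sends only the new question (plus the system prompt).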

Console help (with tab-complete):

gptcli> .help -v

gptcli commands (use '.help -v' for verbose/'.help <topic>' for details):
======================================================================================================
.edit                 Run a text editor and optionally open a file with it
.help                 List available commands or provide detailed help for a specific command
.load                 Load conversation from Markdown/JSON file
.multiline            input multiple lines, end with ctrl-d(Linux/macOS) or ctrl-z(Windows). Cancel
                      with ctrl-c
.prompt               Load different prompts
.quit                 Exit this application
.reset                Reset session, i.e. clear chat history
.save                 Save current conversation to Markdown/JSON file
.set                  Set a settable parameter or show current settings of parameters
.usage                Tokens usage of current session / last N days, or print detail billing info
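`.save` and `.load` round-trip the conversation through a file. The JSON side of that round trip can be sketched like this (the schema here is an assumption, not gptcli's exact file format):

```python
import json

def save_session(messages, path):
    # persist the conversation as a JSON list of {"role", "content"} dicts
    with open(path, "w") as f:
        json.dump(messages, f, indent=2, ensure_ascii=False)

def load_session(path):
    with open(path) as f:
        messages = json.load(f)
    # sanity-check the shape before restoring the session
    assert all({"role", "content"} <= set(m) for m in messages)
    return messages
```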

Run in Docker:

# build
$ docker build -t gptcli:latest .

# run
$ docker run -it --rm -v $PWD/.key:/gptcli/.key gptcli:latest -h

# for host proxy access:
$ docker run --rm -it -v $PWD/config.json:/gptcli/config.json --network host gptcli:latest -c /gptcli/config.json

Features

  • Single Python script
  • Session based
  • Markdown support with code syntax highlight
  • Stream output support
  • Proxy support (HTTP/HTTPS/SOCKS4A/SOCKS5)
  • Multiline input support (via the .multiline command)
  • Save and load sessions from file (Markdown/JSON) (via the .save and .load commands)
  • Print token usage in real time, token usage for the last N days, and billing details
  • Integrate with llama_index to support chatting with documents
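Under the hood, each chat turn is a request to an OpenAI-compatible chat-completions endpoint at api_base. Building such a request can be sketched with the standard library (the endpoint path and payload shape assume the standard OpenAI API; this is not gptcli's actual client code):

```python
import json
import urllib.request

def build_chat_request(cfg, messages):
    # assemble a chat-completions request against the configured api_base
    url = cfg["api_base"].rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": cfg["model"],
        "messages": messages,
        "stream": cfg.get("stream", False),
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": "Bearer " + cfg["api_key"],
            "Content-Type": "application/json",
        },
    )
```

With the sample config above, the request targets https://api.aimlapi.com/chat/completions and authenticates with the configured api_key as a bearer token.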
