This can be done via: iTerm2 (Mac), Guake (Ubuntu), scratchpad (i3/sway), or the quake mode for the Windows Terminal.
</h3>

-At the time of writing, use `text-davinci-003`. Davinci was released together with ChatGPT as part of the [GPT-3.5 series](https://platform.openai.com/docs/model-index-for-researchers/models-referred-to-as-gpt-3-5) and they are very comparable in terms of capabilities; ChatGPT is more verbose.
-
## Productivity benefits

- The terminal starts more quickly and requires **fewer resources** than a browser.
@@ -26,35 +24,11 @@ At the time of writing, use `text-davinci-003`. Davinci was released together wi
Download the binary for your system from [Releases](https://github.com/rikhuijzer/ata/releases).

If you're running Arch Linux, then you can use the AUR packages: [ata](https://aur.archlinux.org/packages/ata), [ata-git](https://aur.archlinux.org/packages/ata-git), or [ata-bin](https://aur.archlinux.org/packages/ata-bin).

-Request an API key via <https://beta.openai.com/account/api-keys>.
-Next, set the API key, the model that you want to use, and the maximum number of tokens that the server can respond with in `ata.toml`:
-
-```toml
-api_key = "<YOUR SECRET API KEY>"
-model = "text-davinci-003"
-max_tokens = 500
-temperature = 0.8
-```
-
-Here, replace `<YOUR SECRET API KEY>` with your API key, which you can request via https://beta.openai.com/account/api-keys.
-
-The `max_tokens` setting is the maximum number of tokens that the server will answer with.
-
-The `temperature` setting is the `sampling temperature`. From the OpenAI API docs: "What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer." According to Stephen Wolfram [[1]], setting it to a higher value such as 0.8 will likely work best in practice.
+To specify the API key and some basic model settings, start the application.
+It should give an error and the option to create a configuration file called `ata.toml` for you.
+Press `y` and `ENTER` to create an `ata.toml` file.

-Next, run:
-
-```sh
-$ ata --config=ata.toml
-```
-
-Or, change the current directory to the one where `ata.toml` is located and run:
-
-```sh
-$ ata
-```
+Next, request an API key via <https://beta.openai.com/account/api-keys> and update the key in the example configuration file.
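The generated configuration file is just four flat key/value pairs. To illustrate what the application has to work with, here is a minimal hand-rolled parse in Rust. This is a sketch only: ata itself may well parse the file differently (e.g. with a dedicated TOML crate), and real TOML parsing should use one.

```rust
use std::collections::HashMap;

// Naive parser for flat `key = value` lines such as those in ata.toml.
// Illustrative only: it ignores TOML features like tables and escapes.
fn parse_flat_toml(text: &str) -> HashMap<String, String> {
    text.lines()
        .filter_map(|line| line.split_once('='))
        .map(|(k, v)| {
            // Strip surrounding whitespace and, for strings, the quotes.
            (k.trim().to_string(), v.trim().trim_matches('"').to_string())
        })
        .collect()
}

fn main() {
    let toml = r#"api_key = "<YOUR SECRET API KEY>"
model = "gpt-3.5-turbo"
max_tokens = 1000
temperature = 0.8"#;
    let config = parse_flat_toml(toml);
    assert_eq!(config["model"], "gpt-3.5-turbo");
    assert_eq!(config["max_tokens"].parse::<u32>().unwrap(), 1000);
}
```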

For more information, see:
@@ -66,8 +40,10 @@ $ ata --help
**How much will I have to pay for the API?**

-Using OpenAI's API is quite cheap; I have been using this terminal application heavily for a few weeks now and my costs are about $0.15 per day ($4.50 per month).
-The first $18.00 is free, so you can use it for about 120 days (4 months) before having to pay.
+Using OpenAI's API for chat is very cheap.
+Let's say that an average response is about 500 tokens, which costs about $0.001.
+That means that if you do 100 requests per day, that will cost you about $0.10.
+OpenAI grants you $18.00 for free, so you can use the API for about 180 days (6 months) before having to pay.
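The arithmetic behind these estimates is easy to sanity-check. A minimal sketch in Rust, assuming a price of about $0.002 per 1,000 tokens (an assumption consistent with the figures above; actual prices vary by model, so check OpenAI's pricing page):

```rust
// Back-of-the-envelope check of the cost figures above. The price per
// 1,000 tokens is an assumption, not an official quote.
fn main() {
    let price_per_1k_tokens = 0.002; // USD, assumed
    let tokens_per_response = 500.0;
    let requests_per_day = 100.0;

    let cost_per_response = tokens_per_response / 1000.0 * price_per_1k_tokens;
    let cost_per_day = cost_per_response * requests_per_day;
    let free_days = 18.00 / cost_per_day; // OpenAI's free credit

    println!("per response: ${cost_per_response:.4}"); // $0.0010
    println!("per day:      ${cost_per_day:.2}"); // $0.10
    println!("free days:    {free_days:.0}"); // 180
}
```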
`ata/src/help.rs` (+46 −8)

@@ -1,3 +1,9 @@
+use crate::config;
+use rustyline::Editor;
+use std::fs::File;
+use std::fs;
+use std::io::Write as _;
+
pub fn commands() {
    println!("
Ctrl-A, Home        Move cursor to the beginning of line
@@ -29,29 +35,61 @@ Thanks to <https://github.com/kkawakam/rustyline#emacs-mode-default-mode>.
");
}

+const EXAMPLE_TOML: &str = r#"api_key = "<YOUR SECRET API KEY>"
+model = "gpt-3.5-turbo"
+max_tokens = 1000
+temperature = 0.8"#;
+
pub fn missing_toml(args: Vec<String>) {
+    let default_path = config::default_path(None);
    eprintln!(
        r#"
-Could not find the file `ata.toml`. To fix this, use `{} --config=<Path to ata.toml>` or have `ata.toml` in the current dir.
+Could not find a configuration file.

-For example, make a new file `ata.toml` in the current directory with the following content (the text between the ```):
+To fix this, use `{} --config=<Path to ata.toml>` or create `{1}`. For the last option, type `y` to write the following example file:

```
-api_key = "<YOUR SECRET API KEY>"
-model = "text-davinci-003"
-max_tokens = 500
-temperature = 0.8
+{EXAMPLE_TOML}
```

-Here, replace `<YOUR SECRET API KEY>` with your API key, which you can request via https://beta.openai.com/account/api-keys.
+Next, replace `<YOUR SECRET API KEY>` with your API key, which you can request via https://beta.openai.com/account/api-keys.

The `max_tokens` sets the maximum number of tokens that the server will answer with.

The `temperature` sets the `sampling temperature`. From the OpenAI API docs: "What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer." According to Stephen Wolfram [1], setting it to a higher value such as 0.8 will likely work best in practice.
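The `temperature` described above rescales the model's token probabilities before sampling. A minimal sketch of that mechanism in Rust (illustrative only; this is not ata's or OpenAI's actual sampling code):

```rust
// Sketch of temperature scaling: logits are divided by the temperature
// before the softmax, so low temperatures sharpen the distribution
// (approaching argmax at 0) and high temperatures flatten it.
fn softmax_with_temperature(logits: &[f64], temperature: f64) -> Vec<f64> {
    let scaled: Vec<f64> = logits.iter().map(|l| l / temperature).collect();
    // Subtract the max for numerical stability before exponentiating.
    let max = scaled.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = scaled.iter().map(|l| (l - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let logits = [2.0, 1.0, 0.1];
    let sharp = softmax_with_temperature(&logits, 0.2);
    let spread = softmax_with_temperature(&logits, 0.8);
    // The top token dominates at low temperature but still leaves room
    // for alternatives at the recommended 0.8.
    assert!(sharp[0] > spread[0]);
    println!("t=0.2: {sharp:?}");
    println!("t=0.8: {spread:?}");
}
```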