Commit 6f21a6b

Merge pull request #84 from forcedotcom/remove-early-docs

remove early docs

2 parents a1baec6 + b4b7778

2 files changed: 0 additions, 68 deletions

CHANGELOG.md

Lines changed: 0 additions & 40 deletions

````diff
@@ -1,45 +1,5 @@
 # Changelog
 
-## 1.0.1
-
-### Added
-
-- **`llm_gateway_generate_text()` UDF wrapper for AI-powered DataFrame transformations.**
-
-  New method on proxy providers to generate AI completions in DataFrame operations via a built-in UDF.
-
-  ```python
-  from datacustomcode import Client
-  from pyspark.sql.functions import col
-
-  client = Client()
-
-  # Generate summaries in a DataFrame column
-  df = df.withColumn(
-      "summary",
-      client._proxy.llm_gateway_generate_text(
-          "Summarize {company}: revenue={revenue}, CEO={ceo}",
-          {
-              "company": col("company"),
-              "revenue": col("revenue"),
-              "ceo": col("ceo")
-          },
-          llmModelId="sfdc_ai__DefaultGPT4Omni",
-          maxTokens=200
-      )
-  )
-  ```
-
-  **Local Development:** Returns placeholder string (doesn't execute)
-  **Production:** Calls a built-in UDF
-
-  **Parameters:**
-  - `template` (str): Prompt template with {placeholder} syntax
-  - `values` (dict or Column): Dict mapping placeholders to Columns, or pre-built named_struct
-  - `llmModelId` (str): Model identifier (required, e.g., "sfdc_ai__DefaultGPT4Omni")
-  - `maxTokens` (int): Maximum tokens that will be spent on this query
-
-
 ## 1.0.0
 
 ### Breaking Changes
````
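The removed changelog entry describes a `template`/`values` contract using `{placeholder}` syntax. As a rough, hypothetical sketch of what that substitution step implies (plain Python on strings for illustration only; the real wrapper operates on Spark `Column` objects via a built-in UDF, and `render_prompt` is not part of the library's API):

```python
import re


def render_prompt(template: str, values: dict) -> str:
    """Replace each {placeholder} in `template` with its value from `values`.

    Hypothetical helper illustrating the template/values contract; the
    actual llm_gateway_generate_text() UDF does this per DataFrame row.
    """
    def repl(match: re.Match) -> str:
        key = match.group(1)
        if key not in values:
            raise KeyError(f"missing value for placeholder {{{key}}}")
        return str(values[key])

    # \w+ matches the simple identifier-style placeholders shown above
    return re.sub(r"\{(\w+)\}", repl, template)


prompt = render_prompt(
    "Summarize {company}: revenue={revenue}, CEO={ceo}",
    {"company": "Acme", "revenue": "10M", "ceo": "Jane"},
)
# prompt == "Summarize Acme: revenue=10M, CEO=Jane"
```

In the deployed UDF the same keys would come from DataFrame columns (or a pre-built `named_struct`), one rendered prompt per row.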

README.md

Lines changed: 0 additions & 28 deletions

````diff
@@ -169,34 +169,6 @@ client.write_to_dlo('output_DLO')
 > [!WARNING]
 > Currently we only support reading from DMOs and writing to DMOs or reading from DLOs and writing to DLOs, but they cannot mix.
 
-### LLM Gateway
-
-Generate AI completions in DataFrame transformations using the LLM gateway UDF.
-
-```python
-from datacustomcode import Client
-from pyspark.sql.functions import col
-
-client = Client()
-
-# Use template with placeholders
-df = df.withColumn(
-    "summary",
-    client._proxy.llm_gateway_generate_text(
-        "Summarize {company}: revenue={revenue}, CEO={ceo}",
-        {
-            "company": col("company"),
-            "revenue": col("revenue"),
-            "ceo": col("ceo")
-        },
-        llmModelId="sfdc_ai__DefaultGPT4Omni",
-        maxTokens=200
-    )
-)
-```
-
-> [!WARNING]
-> This method returns a placeholder string in local development. It only makes an LLM call and spends tokens when deployed, where it calls the real LLM Gateway service via a built-in UDF.
 
 ## CLI
 
````
0 commit comments