live-writing computational poetry system: poet and/as LLMs co-create an evolving disturbance-informed intermedia #GraphPoem - injected poetic-feature-guided LLM “noise” triggers dynamic fusion of embodied (topographical flaneuring) writing and data-driven (topological flaneuring) processes
Let the Noise In is a live-writing poetry system where poet and language models share the same unfolding work. As the poem evolves, a Python engine monitors its formal-sonic, affective, and temporal-structural patterns and detects moments of saturation or redundancy. That is when the system invites generative “noise” shaped by feature analysis and intermedia translation, producing a continuously shifting #graphpoem -- the interaction between an embodied writer (the topographical flâneur) and a data-driven model system (the topological flâneur) -- a living field of disturbance.
While you are writing (or feeding) a poem line by line, the system:
1. Maintains a rolling window of recent lines.
2. Extracts formal and affective features from the evolving text, including:
   - sonic/formal traits (rhythmic variation, sound distributions, script diversity, etc.)
   - affective tendencies (arousal- and valence-like proxies)
   - temporal dynamics (repetition, drift, rupture, motif density)
3. Tracks saturation and redundancy across time.
4. When certain thresholds are crossed, it:
   - searches a feature-annotated corpus of multilingual poetry and intermedia materials
   - selects fragments that are dynamically near or far from the poem’s current state
   - builds a prompt that frames these as linguistic and intermedia noise
5. A language model then generates an intervention, which is injected into the poem stream.
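As a rough illustration of step 2, a feature extractor might reduce the rolling window to a few numeric proxies. The function and feature names below are hypothetical stand-ins, not the notebook's actual API:

```python
import re
from collections import Counter

def extract_features(window):
    """Toy sonic/temporal proxies for a window of poem lines (illustrative only)."""
    text = " ".join(window).lower()
    tokens = re.findall(r"\w+", text, flags=re.UNICODE)
    if not tokens:
        return {"vowel_ratio": 0.0, "repetition": 0.0, "line_len_var": 0.0}
    vowels = sum(ch in "aeiou" for ch in text)
    letters = sum(ch.isalpha() for ch in text)
    counts = Counter(tokens)
    repeated = sum(c for c in counts.values() if c > 1)
    lengths = [len(line.split()) for line in window]
    mean = sum(lengths) / len(lengths)
    var = sum((x - mean) ** 2 for x in lengths) / len(lengths)
    return {
        "vowel_ratio": vowels / max(letters, 1),  # crude sonic proxy
        "repetition": repeated / len(tokens),     # crude redundancy proxy
        "line_len_var": var,                      # crude rhythmic-variation proxy
    }

feats = extract_features(["the sea the sea", "a long unbroken line of salt"])
```

Any real implementation would track much richer features (sound distributions, script diversity, motif density), but the shape — lines in, numeric profile out — is the same.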
You see an interleaved output of:
- your poem lines
- machine-generated noise
- occasional “silent” or minimal interruptions
- intermedia output (WAV & MP4 files)
This system treats writing as an ecology of interacting processes rather than a single authorial voice. The poem is a site where:
- embodied perception,
- archived cultural material,
- algorithmic pattern recognition, and
- stochastic model output
continually interfere with and rewrite one another.
The LLM is not cast as a co-author but as urban, infrastructural, and data noise -- sometimes resonant, sometimes disruptive, always situated within a larger field of relations.
At a high level, each cycle looks like this:
Poem line → Feature extraction → History update
↓
Saturation / rupture check
↓
If threshold crossed:
- Build noise profile
- Sample corpus by feature distance
- Build prompt
- Call language model
- Inject noise into poem stream
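The cycle above can be sketched in miniature as follows; the `saturated` check, the stand-in corpus, and the `[noise]` tag are all illustrative, not the notebook's real components:

```python
import random

def saturated(history, threshold=0.5):
    """Toy saturation check: fraction of repeated lines in the window (illustrative)."""
    if not history:
        return False
    return (len(history) - len(set(history))) / len(history) >= threshold

def run_cycle(lines, window=4, seed=0):
    """Sketch of the line -> features -> rupture-check -> noise-injection loop."""
    rng = random.Random(seed)
    corpus = ["salt static", "tram bell drift", "neon estuary"]  # stand-in noise corpus
    history, stream = [], []
    for line in lines:
        stream.append(line)
        history = (history + [line])[-window:]              # rolling window (history update)
        if saturated(history):                              # saturation / rupture check
            stream.append("[noise] " + rng.choice(corpus))  # inject sampled noise
            history = []                                    # reset after intervention
    return stream

out = run_cycle(["wave", "wave", "wave", "shore"])
```

The real system replaces `saturated` with the multi-feature analysis described above, and `rng.choice` with feature-distance sampling plus an LLM call.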
Python: 3.9+ recommended
Core libraries used in the notebook include:
- numpy
- nltk
- regex
- torch
- transformers
- huggingface_hub
- openai
- requests
- pickle, unicodedata, collections, etc. (standard library)
Install basics with:
pip install numpy nltk regex torch transformers huggingface_hub openai requests
You will also need to download some NLTK resources:
import nltk
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('words')
Parts of the system are designed to call external models.
You may need:
- an OpenAI API key (for GPT models)
- a Hugging Face token (for hosted models such as Mixtral or Qwen)
Set them as environment variables, for example:
export OPENAI_API_KEY="your_key_here"
export HUGGINGFACEHUB_API_TOKEN="your_token_here"
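Inside Python, these variables can then be read with the standard library, falling back to corpus-only noise when no key is present (a minimal sketch, not the notebook's exact logic):

```python
import os

# Read API credentials from the environment; both may legitimately be absent.
openai_key = os.environ.get("OPENAI_API_KEY")
hf_token = os.environ.get("HUGGINGFACEHUB_API_TOKEN")

# If neither is set, the system can still run on local / corpus-based noise only.
use_remote_models = bool(openai_key or hf_token)
```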
If you don’t configure APIs, you can still adapt the system to use only local or corpus-based noise.
- Poetry / Text Corpus
The notebook expects a JSON file like:
asymptote_multilingual_cleaned_intermedia_analyses_stanzas_and_translations.json
Each entry contains:
- text and/or translation
- precomputed feature representations (features_trans in the script)
You can replace this with your own annotated corpus, as long as the feature vectors follow a similar structure. In the current version, the feature vectors in the JSON file were obtained by running the poems in the database through the sonic-affect-temporal analysis pipeline at the top of the notebook; you can do the same with yours.
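The near/far sampling step can be sketched as follows, assuming each corpus entry carries a numeric feature vector under features_trans; the helper names and the tiny stand-in corpus are illustrative:

```python
import math

def feature_distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sample_by_distance(corpus, poem_vec, mode="far", k=2):
    """Pick the k entries nearest to ('near') or farthest from ('far') the poem state."""
    ranked = sorted(corpus, key=lambda e: feature_distance(e["features_trans"], poem_vec))
    return ranked[:k] if mode == "near" else ranked[-k:]

# Tiny stand-in corpus in the expected shape
corpus = [
    {"text": "a", "features_trans": [0.1, 0.2]},
    {"text": "b", "features_trans": [0.9, 0.8]},
    {"text": "c", "features_trans": [0.5, 0.5]},
]
near = sample_by_distance(corpus, [0.1, 0.2], mode="near", k=1)
far = sample_by_distance(corpus, [0.1, 0.2], mode="far", k=1)
```

Fragments sampled "far" from the poem's current state maximize disturbance; "near" ones produce subtler, resonant noise.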
- Live Poem Input
The system reads from a text file as if it were a live writing stream:
for line in poem_input_stream('margento_hk_suite_live.txt'):
Replace this file with your own: my_poem_in_progress.txt
Each new line becomes part of the evolving poem state.
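One plausible shape for such a stream (the notebook's actual poem_input_stream may differ) is a generator that yields non-empty lines from the file:

```python
import os
import tempfile

def poem_input_stream(path):
    """Yield poem lines one at a time, as if written live (illustrative sketch)."""
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.rstrip("\n")
            if line.strip():  # skip blank lines
                yield line

# Demo with a temporary file standing in for a poem-in-progress
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False, encoding="utf-8") as tmp:
    tmp.write("first line\n\nsecond line\n")
    path = tmp.name
lines = list(poem_input_stream(path))
os.unlink(path)
```

Because it is a generator, the rest of the pipeline consumes lines one at a time, so the file can keep growing while the system runs.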
- (Optional) Audio / Video Feature Data
Later sections of the notebook include tools for integrating audio and video feature analyses (e.g., videopoems). These rely on precomputed .pkl files containing:
- affect vectors
- temporal segmentation features
- structural descriptors
These modules are optional and can be ignored if you are working text-only.
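Loading such a .pkl file is a one-liner with the standard library; the feature keys below are hypothetical stand-ins for the real descriptors:

```python
import os
import pickle

def load_media_features(path):
    """Load precomputed audio/video features from a .pkl file (illustrative sketch)."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Round-trip demo with stand-in data; real files hold affect vectors,
# temporal segmentation features, and structural descriptors.
sample = {"affect": [0.2, 0.7], "segments": [(0.0, 3.5), (3.5, 9.1)], "structure": "ABA"}
with open("demo_features.pkl", "wb") as f:
    pickle.dump(sample, f)
loaded = load_media_features("demo_features.pkl")
os.remove("demo_features.pkl")
```

Note that pickle files should only be loaded from sources you trust, since unpickling can execute arbitrary code.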
1. Open the notebook.
2. Install dependencies.
3. Make sure:
   - your poem file path is correct
   - your corpus JSON is available
   - API keys (if used) are set
4. Run all cells.
5. Watch the console output as:
   - your poem lines appear
   - the system occasionally injects model-generated noise
You can adjust:
- window size
- saturation thresholds
- model list
- noise intensity parameters
to shape how often and how aggressively the system intervenes.
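As a sketch, these knobs might be grouped in a single config dict; the parameter names and values here are hypothetical, not the notebook's exact identifiers:

```python
# Illustrative knob settings (hypothetical stand-ins for the notebook's parameters)
config = {
    "window_size": 8,             # lines kept in the rolling window
    "saturation_threshold": 0.6,  # redundancy level that triggers an intervention
    "models": ["gpt-4o-mini", "mistralai/Mixtral-8x7B-Instruct-v0.1"],
    "noise_intensity": 0.4,       # how far from the poem's state sampled fragments may sit
}

# A lower threshold and higher intensity yield more frequent, more aggressive interventions.
aggressive = {**config, "saturation_threshold": 0.3, "noise_intensity": 0.9}
```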
You can reuse this system with:
- your own poems-in-progress
- different literary corpora
- other languages
- your own intermedia archives (audio, video, performance)
To do so, you’ll mainly need to:
1. Provide new texts
2. Compute or approximate feature representations
3. Plug them into the same sampling + prompting structure
This project works with:
- multilingual literary texts
- translations
- performance and videopoetry/intermedia materials
Please ensure you have the right to use and process any texts or media you include. When using contemporary authors’ work, consider attribution, licensing, and context.
This repository is a poetic environment -- a way of staging encounters between bodies, languages, archives, and models. It is meant to be forked, misused, re-tuned, and walked through like a city.
Let the noise in.