fix: Use LLM to generate unique scene prompts for video extensions#318

Open
crowwdev wants to merge 1 commit into chenyme:main from crowwdev:fix-video-scene-repetition

Conversation

@crowwdev

Problem

When generating videos longer than 6 seconds, all extension rounds use the same prompt, causing scene repetition. For example, a 30-second video results in 5 nearly identical 6-second segments.

Related to Issue #316

Solution

  • Integrate grok-4.1-fast model to generate unique scene descriptions for each video round
  • Prevent scene repetition in 30-second videos by using different prompts per 6-second segment
  • Add _generate_scene_prompts_llm() in video.py for base video generation
  • Add _generate_scene_prompt_for_extend() in video_extend.py for manual extensions
  • Each scene now has natural progression without repetition
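The per-round prompt logic described above can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: the helper names `plan_segments` and `build_scene_requests` are invented here, and only the 6-second segment length and the "different angles and actions" idea come from the PR description.

```python
import math

def plan_segments(total_seconds: int, segment_seconds: int = 6) -> int:
    """Number of 6-second rounds needed to cover the requested duration."""
    return math.ceil(total_seconds / segment_seconds)

def build_scene_requests(base_prompt: str, n_scenes: int) -> list[str]:
    # Each round gets its own instruction, so the LLM produces a distinct
    # continuation instead of repeating the base prompt verbatim.
    return [
        f"Scene {i + 1} of {n_scenes}, continuing naturally from the previous "
        f"scene. Base concept: {base_prompt}. Vary the camera angle and action."
        for i in range(n_scenes)
    ]
```

For a 30-second request this yields 5 segment prompts, one per 6-second round.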

Technical Details

  • Uses local Grok API endpoint (http://localhost:8000/v1/chat/completions)
  • Model: grok-4.1-fast
  • Temperature: 0.8-0.9 (ensures diversity)
  • Each scene is based on the original concept but with different angles and actions
  • Includes fallback mechanism for stability

Testing

  • 30-second videos now generate 5 distinct scenes (6 seconds each)
  • Each scene naturally continues from the previous one
  • No repeated content or actions
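The fallback mechanism mentioned under Technical Details could work roughly like this. The function and parameter names here are invented for illustration; the actual implementation lives in `_generate_scene_prompts_llm()` in video.py.

```python
def scene_prompts_with_fallback(base_prompt: str, n_scenes: int,
                                llm_generate) -> list[str]:
    """Ask the LLM for unique scene prompts; degrade gracefully on failure.

    `llm_generate` is a hypothetical callable returning a list of prompts.
    """
    try:
        prompts = llm_generate(base_prompt, n_scenes)
        if len(prompts) == n_scenes:
            return prompts
    except Exception:
        pass  # network or parsing failure: fall through to the fallback
    # Fallback: still avoid identical rounds by numbering the continuation.
    return [f"{base_prompt} (scene {i + 1}, continue the previous action)"
            for i in range(n_scenes)]
```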


Fixes chenyme#316
async with aiohttp.ClientSession() as session:
    async with session.post(
        "http://localhost:8000/v1/chat/completions",
        headers={"Content-Type": "application/json"},


This API needs an api_key:

api_key = get_config("app.api_key")
headers = {"Content-Type": "application/json"}
if api_key:
    headers["Authorization"] = f"Bearer {api_key}"

"model": "grok-4.1-fast",
"messages": [{"role": "user", "content": system_msg}],
"temperature": 0.8,
"max_tokens": 2000


Add to the payload:

"stream": False,

try:
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://localhost:8000/v1/chat/completions",


This call also needs the api_key header.

"model": "grok-4.1-fast",
"messages": [{"role": "user", "content": system_msg}],
"temperature": 0.8,
"max_tokens": 300


Add "stream": False here as well.

