A new demo video, thanks to a vibe-coded video-editing pipeline:
https://npub1xj4hakl5evf6yhd9ppuejw5fkn02gjzc6r3wx3m3mjddzcc4jggsgtffr8.nsite.lol/english.html
How to use a #TollGate in less than a minute:
https://video.nostr.build/c2e1fdf91448af9137f2fa391bd472c2e5b644020550bd6ff5e0fcd507ca80ec.mp4
nostr:nevent1qqsvmjqrmf7vfv5kn8ntdkplxs9e4anefdjvn0xasxvsp70gadezuucppemhxue69uhkummn9ekx7mp0qgsv8c37kh3aqrcckt60tzxcek79fpjghemphhvssyscdh6xq0tu42grqsqqqqqp24lqcd
Video editing pipeline? 🤔
1. Your LLM populates a JSON file with the text that you want to have in the video. One line per video snippet.
2. Your LLM translates the text to all the languages that you want to support and adds them to the JSON file.
3. Some scripts take your raw video and extract frames at a regular interval.
4. You populate the JSON file with information about which frame number maps to the beginning and end of the text entries.
5. The pipeline generates audio snippets for the text entries, adjusts the speed of each video segment to match the duration of its audio snippet, and joins the segments into a single edited video.
You now have an edited video dubbed in multiple languages, which you can adjust simply by changing the frame numbers in the JSON file.
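To make the idea concrete, here's a minimal sketch of what one JSON entry and the speed-matching step might look like. The field names, the frame rate, and the function are assumptions for illustration, not the actual schema used in the repo; the resulting factor is what you'd feed to something like ffmpeg's setpts filter.

```python
import json

# Hypothetical snippet of the pipeline's JSON file (field names are
# assumptions, not the real schema): one entry per video snippet, with
# the narration text, its translations, and the frame range it covers.
entries = json.loads("""
[
  {
    "text": "How to use a TollGate in less than a minute.",
    "translations": {"es": "Como usar un TollGate en menos de un minuto."},
    "start_frame": 0,
    "end_frame": 120
  }
]
""")

FPS = 30  # assumed frame rate of the raw recording


def speed_factor(entry, audio_seconds):
    """Factor to stretch or compress the video segment so its duration
    matches the generated audio; e.g. ffmpeg could apply it with
    setpts=FACTOR*PTS."""
    video_seconds = (entry["end_frame"] - entry["start_frame"]) / FPS
    return audio_seconds / video_seconds


# A 4-second video segment paired with 6 seconds of generated audio
# must be slowed down by a factor of 1.5.
print(speed_factor(entries[0], audio_seconds=6.0))  # → 1.5
```

The nice property of keeping this mapping in a JSON file is that re-timing a snippet is just editing two integers and re-running the pipeline.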
https://gitworkshop.dev/npub1c03rad0r6q833vh57kyd3ndu2jry30nkr0wepqfpsm05vq7he25slryrnw/tollgate-demo-video
On second thought, each stage of the pipeline should write to its own JSON file. It needs some work to make it more maintainable.