Jadoo of the Djinn – AI Trailer + Tutorial

A tutorial on the workflow I used to create this concept trailer using both AI and traditional digital tools.

If you’re having trouble generating consistent characters and producing believable AI video, this tutorial is for you.

Here’s a step-by-step summary of my workflow for creating this Bollywood-style movie trailer, complete with consistent characters and believable imagery, sound, music and effects.

While AI played a large role in the production, understanding the capabilities and limitations of different tools, and how to integrate them, is critical to achieving professional results.

Steps:

Script using ChatGPT

  • I started by prompting ChatGPT with a simple outline of my story and asked it to write a 60-second script for a movie trailer.

Google Docs to edit the script and break it into scenes. Comments to specify shots.

  • I edited the script in a Google Doc, using the comment feature to describe details about visuals for each shot.

Voiceover using Audacity.

  • I recorded the edited script in my own voice in Audacity. You can use any free voice recorder for this step.

Elevenlabs to change voice.

  • In ElevenLabs, I used the voice change feature to alter my voice into a more cinematic movie-trailer style. You could use one of many text-to-speech tools instead, but they won’t retain the rhythm, emphasis or pronunciation of your intended delivery the way a voice changer can.

Leonardo AI to create images per shot.

  • Leonardo AI allowed me to create consistent character images trained on different views and angles of my own face. With this feature, I generated images of the character, using text prompts, in the scenarios described in the script doc.

Adobe Photoshop to clean up images.

  • As many of the AI images were not ideal, I cleaned them up in Adobe Photoshop. The cleanup included generative AI expansion to fill a 16:9 aspect ratio, removal of distorted or unwanted parts of generated images, and even complete face swaps using traditional, non-AI methods.
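If you end up with a large batch of generated stills, it can help to flag which ones need aspect-ratio expansion before opening Photoshop. Here’s a minimal stdlib-only sketch (not part of the original workflow) that reads a PNG’s dimensions straight from its IHDR header and checks them against 16:9:

```python
# Illustrative helper: flag generated PNG stills that aren't 16:9 yet.
# Reads the width/height fields from the PNG IHDR chunk (bytes 16-24).
import struct
from pathlib import Path

def png_size(path: Path) -> tuple[int, int]:
    """Return (width, height) parsed from a PNG file's IHDR chunk."""
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError(f"{path} is not a PNG file")
    width, height = struct.unpack(">II", header[16:24])
    return width, height

def needs_expand(path: Path, target=(16, 9)) -> bool:
    """True if the image's aspect ratio differs from the target (16:9)."""
    w, h = png_size(path)
    # Cross-multiply to compare ratios without floating point.
    return w * target[1] != h * target[0]
```

You could run `needs_expand` over your image folder to build a to-do list of frames that still need Generative Expand treatment.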

Runway Act One tool to make lip sync scenes for Magician and Djinn monster.

  • Runway’s Act One tool allowed me to translate videos of my own recorded actions and voice into performances by characters that were generated as static images in Leonardo.
  • I revisited ElevenLabs’ voice change feature as needed for specific, unique character voices.

Imported images into Runway with text prompts to create videos.

  • I continued on Runway to generate videos using my library of Leonardo AI images along with text prompts and camera movement options.
  • I exported each successful video into an assets folder on my local machine. Be prepared to generate many videos using variations in prompts before you find one you can actually use.
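Because you may generate many takes per shot before finding a usable one, keeping the assets folder organized pays off at edit time. This is a small sketch of one way to do it (the folder layout and naming scheme are my own assumptions, not part of the original workflow): each successful export gets copied into a per-shot folder with an auto-incremented take number.

```python
# Illustrative sketch: file a successful video export into a per-shot
# assets folder with sequential take numbers (take_01.mp4, take_02.mp4, ...).
import shutil
from pathlib import Path

def file_take(export: Path, assets_dir: Path, shot: str) -> Path:
    """Copy an exported clip into assets_dir/<shot>/ as the next take."""
    shot_dir = assets_dir / shot
    shot_dir.mkdir(parents=True, exist_ok=True)
    # Count existing takes with the same extension to pick the next number.
    take = len(list(shot_dir.glob(f"take_*{export.suffix}"))) + 1
    dest = shot_dir / f"take_{take:02d}{export.suffix}"
    shutil.copy2(export, dest)
    return dest
```

With a layout like this, finding the best take per shot in Premiere Pro is just a matter of browsing one folder per shot.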

Compiled and edited video in Adobe Premiere Pro.

  • In Adobe Premiere Pro, I laid out my script in the Audio timeline and started editing my video using clips from my assets.

Envato Elements for music and sound effects.

  • I obtained licensed music and sound effects from Envato Elements to include in my Premiere Pro project.

Added VHS effects in Adobe After Effects.

  • When I was satisfied with the final cut, the last step was to add a VHS effect to mask the telltale too-perfect look of AI and CG imagery, and to give the final production an additional old-school aesthetic.

Produce final video.

  • And there you have it. Have fun with this new technology. While it’s still in its early stages and far from perfect, with practice, patience and a lot of trial and error you can produce something you truly appreciate.
