🚀😲AI Ripple Effect

newslounge.co · Sept 19, 2024

Good Morning. 

This is News Lounge. The world of media is like a mess of old charging cables — and we’re here to help you untangle it!

🙋‍♂️Today, I have a small favor to ask. If you could take a moment to complete the survey at the bottom of this email, it would be incredibly helpful. Your feedback will allow me to assess how I can provide more value to you and ensure that I'm sending you the most relevant content.

Here’s what we’ve got for you today:

  • Lionsgate’s AI Deal

  • How To Flux? - Case Study

  • Get My FLUX ComfyUI Workflow

  • Transformers’ Impressive VFX

-Ardy

Was this email forwarded to you? You can sign up here.

🚨Just a quick note: You can share this newsletter with your friends using the unique referral link at the bottom of this email. By doing so, you'll earn points that can be exchanged for some cool prizes. Let's spread the word! 😎

HEADLINE

JOHN WICK STUDIO LIONSGATE SIGNS A DEAL WITH AI STARTUP ‘RUNWAY’

Photo: Murray Close/ Lionsgate - John Wick

The deal focuses on developing and training a new AI model, tailored specifically to Lionsgate’s proprietary catalog. The model is designed to assist Lionsgate Studios, along with its filmmakers, directors, and creative talent, by enhancing their work. It generates cinematic video, which can then be refined using Runway’s suite of customizable tools.

🙋‍♂️To be clear, this isn't a licensing deal, nor is it intended to train a general video generation model. The purpose is to develop an AI model for Lionsgate's internal use only.

There are still plenty of open questions, especially regarding the use of content, likenesses, voices, performances, logos, and trademarks. It’s safe to assume Lionsgate has carefully reviewed their contracts and guild agreements, but potential data security and insurance risks remain.

This move marks the first major partnership between Lionsgate and Runway, but it’s clear the entire industry is keeping a close eye on generative AI tech and its ability to produce images and video based on text or image prompts.

The reality is that studios will inevitably assume some level of legal risk when they begin using these models in actual productions. For now, media companies might see the fine-tuned models they’re developing as early experiments to test and learn about the tech’s capabilities, all in pursuit of any potential cost savings or competitive edge.

-Ardy

CASE STUDY

HOW TO FLUX?

credit: Ardy Ala

I decided to create a YouTube video where I compare most of the samplers and schedulers in the Flux model using different prompts, seeds, and steps. I demonstrate how adding more steps in certain scenarios might not always yield the best results, and I highlight the best parameters for specific prompts.

What is FLUX?
FLUX is an image generation model similar to Stable Diffusion, but with key differences in how it processes text prompts. It utilizes two types of text encoders: a CLIP-based token encoder and a T5 (Text-to-Text Transfer Transformer) encoder. The T5 encoder is unique because it understands natural language more effectively, which has a significant influence on the output of images generated by FLUX. This dual-encoder setup makes FLUX flexible in how it interprets and generates visual content based on text inputs.
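If you want to see the dual-encoder setup in action, here’s a minimal sketch using Hugging Face’s diffusers library (one possible frontend among several — that choice is my assumption). In FluxPipeline, the prompt argument feeds the CLIP encoder and prompt_2 feeds the T5 encoder; leave prompt_2 out and the same text goes to both:

# Minimal sketch, assuming a recent diffusers with FLUX support, a GPU,
# and access to the FLUX.1 [schnell] weights on Hugging Face.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload idle modules to fit smaller GPUs

# prompt -> CLIP encoder (keyword-style), prompt_2 -> T5 encoder (natural language)
image = pipe(
    prompt="cinematic portrait, 35mm, shallow depth of field",
    prompt_2="A weathered lighthouse keeper smiling at dawn as soft golden "
             "light cuts through the fog",
    num_inference_steps=4,  # schnell is distilled to work in very few steps
    generator=torch.Generator("cpu").manual_seed(42),
).images[0]
image.save("flux_dual_prompt.png")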

In the FLUX model, different samplers and schedulers significantly impact the outcome of your generated images. Here's a quick overview:

  • Top-performing samplers for realistic photos include DPM_Adaptive, DPM++ 2M, IPNDM, and Uni_PC_BH2. These samplers generally produce well-formed images across a range of subjects. Euler works well too but often requires more steps for refinement.

  • Schedulers like SGM_Uniform, Simple, and Beta perform consistently well, removing noise and producing cleaner results. The DDIM_Uniform scheduler is unique, sometimes creating more artistic or divergent images at certain step counts, making it ideal for creative experimentation.
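If you want to build a comparison grid like my contact sheet below, the loop itself is simple. This is just a sketch: generate() here is a hypothetical stand-in for whatever backend you actually run (a ComfyUI API call, diffusers, etc.), and the sampler/scheduler names follow ComfyUI’s spelling:

# Sketch of a sampler/scheduler contact-sheet loop. generate() is a
# hypothetical placeholder for your real backend, not a library call.
from PIL import Image

SAMPLERS = ["dpm_adaptive", "dpmpp_2m", "ipndm", "uni_pc_bh2", "euler"]
SCHEDULERS = ["sgm_uniform", "simple", "beta", "ddim_uniform"]
SEED, STEPS, TILE = 42, 30, 256  # fix the seed so only sampler/scheduler vary

def generate(prompt, sampler, scheduler, steps, seed):
    """Hypothetical: call your image backend and return a PIL.Image."""
    raise NotImplementedError

prompt = "photorealistic portrait of an astronaut, studio lighting"
sheet = Image.new("RGB", (TILE * len(SCHEDULERS), TILE * len(SAMPLERS)))
for row, sampler in enumerate(SAMPLERS):
    for col, scheduler in enumerate(SCHEDULERS):
        img = generate(prompt, sampler, scheduler, STEPS, SEED)
        sheet.paste(img.resize((TILE, TILE)), (col * TILE, row * TILE))
sheet.save("contact_sheet.png")  # rows = samplers, columns = schedulers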

How to Get the Most Out of Prompts with FLUX:

  • Steps and Convergence: FLUX tends to converge in stages: at 20-25, 30-35, and 40-50 steps. Increasing steps can stabilize the image, but going beyond the first tier (20-25) may lead to changes in composition. For complex images, aim for 40 steps or higher.

  • Guidance: Adjusting the guidance value allows for more control over the style and realism. Higher guidance (3.5+) polishes and refines the image, while lower guidance (below 3) creates a more natural, artistic look. Negative guidance can lead to unexpected but often creative results.

  • Shift Settings: Fine-tuning Max and Base shift settings can sharpen details or reduce noise. Higher shifts increase detail but may introduce artifacts, while lower shifts provide softer, cleaner images. Balance with steps for best results.
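Here’s how those three knobs translate to code, using diffusers as one possible frontend (in ComfyUI, the shifts live on the ModelSamplingFlux node instead). Treat this as a sketch: the values are illustrative, and I’m assuming the FLUX.1 [dev] checkpoint, which actually respects guidance:

# Sketch: steps, guidance, and shift settings via diffusers (illustrative values).
import torch
from diffusers import FluxPipeline, FlowMatchEulerDiscreteScheduler

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# Shift settings live on the flow-matching scheduler: higher shifts push
# detail (risking artifacts), lower shifts give softer, cleaner images.
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipe.scheduler.config, base_shift=0.5, max_shift=1.15
)

image = pipe(
    prompt="an intricate steampunk city at sunset, dense machinery and fog",
    num_inference_steps=40,  # complex scene, so aim for the 40+ tier
    guidance_scale=3.5,      # 3.5+ polishes; below 3 reads more natural
    generator=torch.Generator("cpu").manual_seed(7),
).images[0]
image.save("flux_tuned.png")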

🔺Download my comparison contact sheet here

🔺Download my workflow here

-Ardy

DID YOU FIND THIS 'CASE STUDY' HELPFUL?

I'd love to hear your thoughts.


SNIPPET

Instagram will move users under 16 to new teen accounts with parental supervision controls, including the ability to limit time in the app and see who their kids are DMing.

👾The Mill VFX crew showcases the full range of their creature and compositing talents in this captivating spot, directed by MJZ’s Matthijs van Heijningen, bringing smiles and viewers to NFL Sunday Ticket on YouTube TV.

🤳Snapchat’s new AI feature lets you create Snapchat lenses by simply describing them.

🤿Cosm Dallas, a new sports bar, is changing the game with an incredible giant screen that’s like a mini version of the Sphere in Vegas.

🚀Kling AI’s motion brush looks very promising. You can use this feature to generate videos from images in various aspect ratios, with a maximum length of 5 seconds.

VFX MAGIC BEHIND THE TRANSFORMERS: REVENGE OF THE FALLEN

Transformers: Revenge of the Fallen, the second installment in the franchise, marked its 15th anniversary this year. Regardless of your opinion on the film, the visual effects were undeniably impressive.

Digital Domain, one of the leading VFX studios in Hollywood, was a key player in bringing the amazing visuals to life.

One particularly striking sequence was Alice’s transformation into her Decepticon Pretender form. The challenge, as explained by Adam Sidwell, who was a Character TD at the time, was merging the CGI transformation seamlessly with live-action footage of the actress.

-NL

🙋‍♂️WE NEED YOUR FEEDBACK…

What'd you think of today's edition?


INTERESTED IN SPONSORING US OR LEARNING ABOUT AI INTEGRATION IN YOUR STUDIO?

REPLY TO THIS EMAIL TO SCHEDULE A CALL!!

👋That’s it for today, I’ll leave you with this one!!
Stop overthinking everything!!!

The torment of precautions often exceeds the dangers to be avoided. It is sometimes better to abandon one’s self to destiny.

-Napoleon

BUT, WAIT! THAT’S NOT ALL! ⬇

🎁You can get free stuff for referring your friends!!

Earn free gifts 🎁

  • 5 referrals - “Water Bottle Stickers”

  • 15 referrals - “Mystery Box”

  • 25 referrals - “Water Bottle”

  • 40 referrals - “Nuphy Desk Mat”

  • 60 referrals - “Logitech Mouse”