
🚀A new era for media, and the internet

newslounge.co · June 27, 2024

Good Morning. 

Welcome to another edition of newslounge. If you're new here, every week I share what I find interesting in the world of VFX and AI.

NBC recently revealed that an AI model trained on Al Michaels' voice will be used to narrate personalized Olympic highlight reels. These rapid shifts in the media landscape signal the dawn of a new era for the internet, where much of the content we consume will be AI-generated or AI-assisted. As media giants embrace this trend, we can expect to see a significant increase in AI-driven content as technology continues to evolve.

On today’s menu:

  • RunwayML Catches Up

  • Stable Diffusion 3 vs SDXL

  • Houdini 20.5 keynote

  • Claude 3.5 Sonnet vs GPT-4

  • Launch Of The Day

  • The Future Of Video

-Ardy

Was this email forwarded to you? You can sign up here.

🚨Just a quick note: You can share this newsletter with your friends using the unique referral link at the bottom of this email. By doing so, you'll earn points that can be exchanged for some cool prizes. Let's spread the word! 😎

CASE STUDY

RUNWAY CATCHES UP IN THE VIDEO GENERATION RACE WITH GEN-3 ALPHA

This month, two video generation models were publicly released: Kling, from Kuaishou Technology, the company behind a popular TikTok competitor in China, and Luma AI's Dream Machine. These releases have likely intensified the video generation race, adding pressure on OpenAI and companies like Runway and Pika. Fortunately, Runway was already developing its next-generation models, the Gen-3 family. The company recently shared a preview of Gen-3 Alpha, the first model in this new series.

Runway Gen-3 Alpha marks a significant leap in high-fidelity, controllable video generation. Built on new infrastructure for large-scale multimodal training, it surpasses Gen-2 in consistency and motion.
This model, trained on both videos and images, powers Runway's Text to Video, Image to Video, and Text to Image tools, along with control modes like Motion Brush, Advanced Camera Controls, and Director Mode. Gen-3 Alpha introduces safeguards, including a visual moderation system and C2PA provenance standards.

The technology behind Gen-3 Alpha enables precise key-framing and imaginative transitions. It excels in generating expressive human characters and diverse cinematic styles, making it a powerful tool for artists and storytellers.

🤿Runway Gen-3 Alpha is part of a rapidly evolving landscape of AI video generation models, competing with several other notable platforms. Here's an overview of some key competitors and the technology driving improvements in these models:

OpenAI's Sora: Considered one of the leading models, Sora has set a high bar for video quality and consistency.

Google's Veo: Another strong competitor that has shown impressive results comparable to Sora.

Shengshu Technology's Vidu: A model from a Chinese company that has produced high-quality video generations.

Luma Labs' Dream Machine: This platform is publicly available and can be used right away, and the results have been impressive, though wait times can be lengthy because of how accessible it is. Luma's Anime Machine is also extremely promising.

Kuaishou Technology's Kling: Recently released, this model comes from the company behind a popular TikTok competitor in China and has demonstrated capabilities on par with Sora.

-Ardy

Credit: Ardy Ala

💡I made a quick video exploring Stable Diffusion 3 in comparison to SDXL workflows. Here are a few things I learned along the way (a rough code sketch follows the list):

  1. Use Multiples of 64: Dimensions that are multiples of 64 give the best results, although you may need to deviate from them to work around the model's odd censoring of specific prompts.

  2. Handle Negative Prompts Carefully: Apply negative prompts selectively, only during specific time steps (e.g., the first 10%) to avoid overpowering your image.

  3. Experiment with Samplers: Stick to Euler and DPM++ 2M samplers for reliable outcomes.

  4. Fine-Tune CFG Values: Keep CFG values between 3.5 and 5.5 to balance prompt adherence and image quality.

  5. Combine Styles Thoughtfully: Mixing 2D and realistic elements requires precise prompting and multiple trials to achieve the desired effect.
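
For reference, here's a minimal sketch of these settings as a Hugging Face diffusers script rather than the node-based workflow from the video; the model ID and defaults below are assumptions, not a copy of my exact setup.

```python
# Minimal SD3 sketch applying the tips above. Assumes the public
# "stabilityai/stable-diffusion-3-medium-diffusers" checkpoint and a CUDA GPU.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")
# Tip 3: the pipeline's default scheduler is already an Euler-style
# flow-matching sampler, so no swap is needed here.

image = pipe(
    prompt="a 2D illustrated fox walking through a photorealistic forest",
    negative_prompt="blurry, low quality",  # Tip 2: keep it light; limiting it to
                                            # the first ~10% of steps needs a custom
                                            # step callback, which is omitted here.
    height=1024,                            # Tip 1: multiples of 64
    width=1024,
    num_inference_steps=28,
    guidance_scale=4.5,                     # Tip 4: CFG between 3.5 and 5.5
).images[0]
image.save("sd3_vs_sdxl_test.png")
```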

🔺Watch the video [HERE]

-Ardy

SNIPPET

⛔Venture capital firm Y Combinator joined around 140 startup founders in condemning a California bill that would require AI companies to conduct risk assessments before releasing new models.

🚀Claude 3.5 Sonnet was recently released by Anthropic, and people are discovering incredible new use cases that weren't possible with GPT-4.

🕵️‍♀️Open-Sora presents Open-Sora-Plan v1.1.0, which significantly improves video generation quality and duration.

👩‍💻Jace.ai is an AI assistant that automates web tasks without human guidance. Jace can perform various actions within a web browser, from simple tasks like booking hotels and flights to more complex operations such as setting up recruitment pipelines on LinkedIn or launching marketing campaigns.

😲The Houdini 20.5 launch took place in Paris, France on June 18th; here are the keynotes.

🎥CMR-M1 is the first camera that directly integrates generative AI technology into the video capture process.

👾NVIDIA Releases Open Synthetic Data Generation Pipeline for Training Large Language Models

LAUNCH OF THE DAY

🚀As a follow-up to my recent post about the Dash plugin for Unreal Engine, I'd like to introduce the exciting new update that makes it an even more valuable tool:

What is DASH?
Dash is an Unreal Engine 5 plugin that simplifies creating natural environments, from serene terrains to lush forests, for all skill levels.

This is a fairly major release: it introduces new tools, such as the vines tool, and features like property references that significantly increase your productivity. Additionally, they've revamped their AI tagging solution to provide more accurate tags at a much faster rate.

One major feature? AI Tagging with GPT-4
They have integrated GPT-4 as their main asset tagging solution. It's incredibly fast and provides state-of-the-art quality. They use an enterprise plan that ensures your data remains secure and isn't used for training by OpenAI or by the plugin's developers.
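
To make the idea concrete, here's a rough, hypothetical sketch of GPT-based asset tagging using the OpenAI Python SDK. This is not Dash's actual pipeline, just an illustration of the general approach; the prompt and function name are made up.

```python
# Hypothetical illustration of GPT-based asset tagging (not Dash's code).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def tag_asset(asset_name: str, description: str) -> list[str]:
    """Ask GPT-4 for a handful of short tags describing a 3D asset."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You tag 3D assets for a searchable library. "
                         "Reply with 5-10 short, comma-separated tags only.")},
            {"role": "user",
             "content": f"Asset: {asset_name}\nDescription: {description}"},
        ],
    )
    return [tag.strip() for tag in response.choices[0].message.content.split(",")]

print(tag_asset("rock_cliff_04", "Mossy granite cliff face, photoscanned, 4K textures"))
```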

🔺Watch the video [HERE]
👩‍💻Learn more [HERE]

-Sponsored

TRENDING

EXPERIMENTAL AI-POWERED MOVIE CAMERA

CMR-M1

SpecialGuestX, a leading creative technology agency, and 1stAveMachine, a mixed-media production company, have joined forces to unveil the CMR-M1 – the first-ever AI-powered movie camera. Although still in its prototype stages, the CMR-M1 promises to revolutionize filmmaking by enabling users to capture AI-generated video content in real-time.

Could this be the future of video?

-NL

🙋‍♂️WE NEED YOUR FEEDBACK…

What'd you think of today's edition?

Login or Subscribe to participate in polls.

INTERESTED IN SPONSORING US OR LEARNING ABOUT AI INTEGRATION IN YOUR STUDIO?

REPLY TO THIS EMAIL TO SCHEDULE A CALL!!

😂😂That’s it for today, I’ll leave you with this!!😂😂

-newslounge

BUT, WAIT! THAT’S NOT ALL! ⬇

🎁You can get free stuff for referring your friends!!

Earn free gifts 🎁

  • 5 referrals - “Water Bottle Stickers”
  • 15 referrals - “Mystery Box”
  • 25 referrals - “Water Bottle”
  • 40 referrals - “Nuphy Desk Mat”
  • 60 referrals - “Logitech Mouse”