text-to-video State of the Art: try Runway GEN-2 today (step-by-step)
A study driven by an astronaut cat. Actually not a bad generation!
Hello, amazing people! Today I'll give you a step-by-step with examples on the state of the art for text-to-video: RunwayML GEN-2!
You can find it by going here. And don't worry, it's free for the first 50+ videos. No credit card required either.
Then, click on the highlights below:
Here you have three important areas:
Where you will type your prompt
Where the video will be generated (each one took me 30-60s)
The amount of credits you have left
So, to test it reasonably thoroughly without turning this into a 30-minute essay (keeping within that sweet 2-3 min spot), I decided to use cats!
(unfortunately not AI-generated… yet)
But not just cats for cats' sake (though that would already be nice). I decided to test 3 aspects:
The simplest context I could see a cat in
The mix of two concepts (cats + astronaut)
Introducing an abstract concept (“reflecting on the meaning of life”).
Let's see how those 3 fared!
Prompt 01:
Cat running in the wild
Result: nightmare fuel, 1/10 😨
Prompt 02:
An astronaut cat in space
Result: actually cute and useful, 7/10 😺🧑‍🚀
Prompt 03:
A cat reflecting on the meaning of life
Result: actually much better than I thought it would be, but still somewhat off, 6/10
Main takeaways:
The tooling is nice, but it clearly looks like text-to-image did a year ago, with DALL-E and its aberrations. One can only wonder what it will look like a year from now
If you really want/need to use it, I would suggest generating a bunch of videos of the same subject and weaving in a storyline (like this cat, for example).
One thing I didn't mention: videos are limited to 4s as of today, so bear that in mind too
So, what did you think? Plus, reply if you want us to cover another text-to-video tool 👀