Creators who embrace AI technology can unlock new worlds of expression, 10x their productivity, and enable 1000x more creators.
I know not all creative professionals share my enthusiasm for AI.
But if history has taught us anything, it’s that technology is inevitable—especially when it’s as powerful as AI.
No matter where you fall on the fear-to-enthusiasm spectrum, we’re at a pivotal moment in history. And creators, both technically skilled and amateur, have a once-in-a-lifetime opportunity to shape how this technology is used and to elevate their own field.
So instead of resisting it, here’s why we, as creators, should embrace using AI in creative work.
When I was making music as a teenager, I wanted to record professional-sounding songs. But in the mid-90s, that required $1,000/day studio rental fees and buying rack-mounted hardware effects like reverb, compression, and phaser for hundreds of dollars each.
So when I finally saved up $1,300 for the Roland VS-880, it changed my life and the way I could create music.
This was the first tool that let me digitally record, mix, edit, and process effects across 8 tracks, all in one place, and it helped me create things I’d never been able to before due to my lack of resources.
It also unleashed an entirely new wave of creativity in me—all I wanted to do was record songs (my parents probably thought I was insane for locking myself in my room with this thing for hours on end).
What the Roland VS-880 did for me in making music is exactly what the best creative tools do: help scale and democratize creativity.
From the printing press to the iPhone, technology has always unlocked access for more creativity. Generative AI can and will do the same thing for creators today.
As a founder, nothing excites me more than empowering someone who doesn’t consider themselves “a creative person” to create. And for self-identified “creative” people, it’s about empowering them to work faster and better. I believe every human is inherently creative, so I broadly divide creators into two tiers.
Two categories of creators:

1. Professionals: people with training or technical skill in a creative field.
2. Amateurs: everyone else with a story to tell.

With generative AI, professionals will become 10x more efficient and effective, without sacrificing any quality. In fact, the quality will improve.

For amateurs, generative AI will open the door for 1000x more of them to tell their stories.
If we focus on video as a creative medium, we’re already seeing this happen in the consumer space. Video professionals are creating high-production-value videos like commercials and produced events faster and more easily. Simultaneously, the iPhone and platforms like TikTok, Instagram, and YouTube have allowed countless storytellers to create and edit videos without any technical skill.
Since consumer behavior often predicts the future of work (WhatsApp led to Slack, and FaceTime led to Zoom), generative AI is already making its way into the workplace as well.
At Capsule, we often hear from our customers who are in-house video editors at enterprise and media companies that they’re bogged down by tedious or minor editing requests.
This means they’re not spending as much time as they’d like on long-form or high-quality storytelling.
Scott Belsky, founder of Behance and Chief Strategy Officer at Adobe, summarizes why this matters:
“As more human jobs become assisted, automated, or replaced by artificial intelligence, we must spend our hours where we have a competitive advantage over machines: developing new ideas, expressing old things in new ways, innovating process, and crafting the story that infuses our creations with meaning.”
Part of why I’m so excited about our AI Studio product is that it automates many of those tedious tasks of video creation. This not only opens the door into video editing for all creators, but it also gives professionals more time to focus on higher-order tasks and tell better stories.
If you, as a creator, prefer to use a more analog or manual tool as part of your process, there’s nothing stopping you. But intentionally resisting using AI won’t make it go away.
In fact, history offers plenty of examples of technologies that creators feared would put them out of business or destroy their field. These technologies not only won out over their detractors; they moved creative fields forward. Let’s take a closer look at how this played out for Auto-Tune.
When Auto-Tune gained widespread usage in the late 2000s, it faced severe backlash from many musicians, including Jay-Z, who wrote an entire song called "D.O.A. (Death of Auto-Tune)" (ironically, his voice was recently replicated with a shockingly good AI filter—more on that below).
Today, it’s hard to find a recording artist who doesn’t use Auto-Tune, live or in the studio. Countless musicians have used it as a tool to build new sounds and modes of expression. Rolling Stone’s Jody Rosen shared how two artists in particular used Auto-Tune as an effective tool:
“T-Pain taught the world that Auto-Tune doesn’t just sharpen flat notes: It’s a painterly device for enhancing vocal expressiveness, and upping the pathos. In ‘Bad News,’ Kanye’s digitized vocals are the sound of a man so stupefied by grief, he’s become less than human.”
Tools can be inspiring in and of themselves. In the same way that Auto-Tune has become its own sound, AI will become its own “sound” or “brush,” with its own kind of aesthetic—one that looks and feels different from anything else that's ever been created.
For artists and creators, I can't think of anything more exciting than to work on something that’s never been done before. Here are two musicians who are using AI to create entirely new works:
Mr. J. Medeiros, a rapper with the group AllttA, recently published an AI-assisted song called “Savages.”
Medeiros’ bandmate 20Syl discovered an AI text-to-speech site that included a “rapper” filter, and he and Medeiros decided to play around with it, plugging in verses from one of their unproduced songs and applying the Jay-Z filter.
Mr. J. Medeiros and 20Syl, a rap duo under the name AllttA, used an AI filter of Jay-Z’s voice to rap some of the lyrics in their song “Savages.”
Between engineering the vocals over and over and tricking the AI through misspellings and different vowels to say the lyrics correctly, it took a month for them to get the Jay-Z layer right.
“It felt like sampling for the first time again,” said Medeiros, “or taking drum machines and sample loops to new levels by creatively tweaking their official ‘means.’”
He describes “Savages” as an experiment; it’s not monetized or officially released, and he hopes other artists will use AI as a new creative tool, too.
“You can’t fear something you haven’t experienced,” says Medeiros. “The point is to try—to feel, and see, and be open to the ways in which AI can be used in your own process.”
AllttA discovered AI’s possibilities for creating vocal textures, just like they would with a synthesizer. “It’s a tool to be used with different tools,” says Medeiros. “Not the whole, just a piece.”
Jason LaFerrera, an artist and engineer in New York, makes music with computers but is “always striving to make it sound the exact opposite.” He recently created a twangy cover of Aphex Twin’s heartfelt instrumental “Avril 14th.”
To create a video that visually supported the music, he looked to painters of the American Frontier.
LaFerrera then used Stable Diffusion to generate anchor images in the style of Thomas Moran, generated 10-20 variations of each, and stitched them together into a short video clip, giving the clips a stop-motion, painterly quality.
He then used Capsule's video scripting to cut between clips perfectly on beat with the song.
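The beat-matching idea in that last step can be sketched in a few lines. This is a hypothetical illustration, not Capsule’s actual API or LaFerrera’s exact process: given a tempo, it computes how long each image clip should hold so that every cut lands on a beat.

```python
# Illustrative sketch only -- the function name and parameters are
# assumptions, not Capsule's real API.

def beat_aligned_durations(bpm: float, beats_per_clip: int, n_clips: int) -> list[float]:
    """Return a duration (in seconds) for each clip so cuts fall on beats."""
    seconds_per_beat = 60.0 / bpm
    return [beats_per_clip * seconds_per_beat for _ in range(n_clips)]

# Assuming a 100 BPM track for illustration, holding each image for 2 beats:
durations = beat_aligned_durations(bpm=100, beats_per_clip=2, n_clips=4)
print(durations)  # → [1.2, 1.2, 1.2, 1.2]
```

These durations could then feed any cutting tool (an editor’s timeline, or an ffmpeg image sequence) so the stop-motion frames flip in time with the music.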
Jason LaFerrera’s twangy cover of Aphex Twin’s “Avril 14th” was completely computer generated—from the lap-steel guitar sounds and snare drums to the accompanying AI-generated images that form the video.
“I think musicians and artists are hesitant to use AI because they don't understand how it works,” he says. But what excites LaFerrera is how AI is providing world-class production for a fraction of the cost.
His original plan for the video was to shoot footage for it on a cross-country road trip. But when the trip was canceled, he turned to other tools. “I can't imagine sifting through that footage after just being able to run some code to get more similar images instantly from Stable Diffusion,” he says.
“We're already seeing 30-second-long TikTok videos that look and sound like a Hollywood production, and that's just going to continue to get better with AI.”
Many of the fears around AI are warranted, but I think the discussions today focus too much on the downside and not enough on the upside. Here are a few ways creators can reframe their fears about generative AI.
As a founder of multiple companies that build creative tools, I’ve always looked for team members with both technical and creative backgrounds.
But today, I’m also looking for candidates who are excited about AI. And it's not just because we're building AI-powered tools.
It's because I believe that the employees who are investing their time into learning how to leverage AI will be the most productive employees of the future. It also signals to me an ability to adapt—essential for a startup employee.
Qualities like a willingness to learn new tools and the ability to adapt will help creators stand out in the age of AI.
If instead you’re resistant to using new tools like AI, the reality is that you may get left behind. Many of my engineering friends got locked into a specific programming language in their career, which is like getting typecast as an actor: you end up doing only “that thing,” you're only known for “that thing,” and once “that thing” is no longer popular or in favor, your career is gone.
Fearing replacement will only make creators more replaceable. Instead, embrace the opportunity to learn a new skillset to help create better work.
The copyright issues around generative AI are complicated, and they will remain the subject of ongoing discussion for many years. Should artists have total control over their work? And how?
Creative people are prideful, and they have every right to be. When I created music, I was proud of my work.
But I also don’t view that music as 100% my own, because no creation is truly original. Creations come from countless sources of inspiration and a life of watching other people make music or art.
If you scroll through the subreddits on AI art, you'll notice that these images, ones we've never seen before, are largely inspired by a body of work created by humans over the last 30 years. But so is pretty much everything we create.
So, whether an artist is inspired by a body of work from the ’60s and reinventing it in a new way, or they’re using AI as a co-producer to pull different pieces and mix them into something totally new—it’s all inspiration. And AI is simply another source of inspiration.
I’m personally not worried about my music being used to train these AI models; in fact, I hope my music ends up in some of these models. The idea of hearing another version, a future version, of something I recorded on that Roland VS-880, is staggering to me.
I’ve been pulling from the communal well of human creativity my whole life, and I’m eager to give something back to it.
So much of the generative AI discussion is driven by fear. But creators have been through big technology shifts before, and we’ll do it again, using AI to evolve artistic crafts.
My hope for creators of all skill levels is that they view AI as an opportunity: an opportunity to unlock new modes of expression, to work faster and better, and to tell more stories.
Yes, there will be negative outcomes as a result of AI, as is true of virtually any technology. But incredible, positive things will come out of it as well. And the goal should be for us as humans to steer these technologies toward the good outcomes and not the bad.
If you’re interested in joining the movement of using creative, AI-assisted tools to tell more stories, sign up to join the waitlist for AI Studio here.