Adobe Wants AI to Help You Use Photoshop
Adobe has a new blog post up outlining their vision for “agentic AI” in Adobe’s products.
At Adobe, our approach to agentic AI is clear, and it mirrors our approach to generative AI: The best use of AI is to give people more control and free them to spend more time on the work they love – whether that’s creativity, analysis or collaboration.
We’ve always believed that the single most powerful creative force in the world is the human imagination. AI agents are not creative, but they can empower people – enabling individuals to unlock insights and create content that they wouldn’t otherwise be able to and enabling creative professionals to scale and amplify their impact more than ever. For people at all levels, agentic AI’s potential makes starting from templates feel stale and old-fashioned. For professionals, it offers a pathway to growing their careers by freeing up time to do more of the things only they can do.
From Abby Ferguson’s post on DPReview covering this:
Last week, Adobe announced that a handful of AI-based features would be moving out of Premiere Pro beta. Now, the company is teasing even more AI tools for Premiere Pro and Photoshop ahead of Adobe Max London on April 24. In a blog post, the company provides a basic overview of what’s coming, promising even faster edits and helpful tools for learning.
We certainly see “agentic” used a lot these days, but most of the time it describes the retail-fantasy scenario where an LLM agent buys or books things on your behalf. Capitalism, bebe.
This is more like GitHub Copilot in VS Code, where there is a back and forth, with a result the user still has control over if they choose to exercise it. The work is done in layers, with edits applied non-destructively in many cases.
Back to Abby Ferguson:
Adobe says this isn’t exclusively about speeding up the editing process. Instead, it also envisions the creative agent as a way to learn Photoshop. Given how complex and overwhelming the software can be for new users, such a resource could be helpful. Plus, Adobe says it could also handle repetitive tasks like preparing files for export.
One of the major problems I have with generative AI for images and video is that the output is basically clip-art or stock footage. It’s smearing together associated patterns it was trained on and delivering a final result. The only way to continue to edit or refine that result is through text commands, which can make sweeping changes to things you did not want touched, and which you have no easy way to control yourself.
Solutions that integrate with an image- or video-editing workflow allow for the level of control a person might need for their job. When some doofus on LinkedIn asks AI to make them look like an action figure in its packaging, or for Ghibli art of their dog, they don’t care about control at all, but that’s because it’s not a job. They don’t answer to a client who wants something nudged, not replaced.
There’s no easy way to directly link to the videos from Adobe’s blog post previewing these things, but the video you want to watch is the second one under the Photoshop subhead.
There are, of course, all of the other issues with generative AI, but this type of work from Adobe is far more interesting than making the smearing machine smear better, or adding new “styles” to the smearing machine so we can all have exactly the same “art”.
It’s important to remember that if everyone can “make” the same stuff, and they all have the same level of non-control over it, then there’s nothing that really distinguishes that stuff. Here, there’s the potential to learn, to decide on the changes being made, and to branch off using your own brain and your own skills.