bredren 9 hours ago

This article seems to be three to six months past due. As in the insights are late.

>One animator who asked to remain anonymous described a costume designer generating concept images with AI, then hiring an illustrator to redraw them — cleaning the fingerprints, so to speak. “They’ll functionally launder the AI-generated content through an artist,” the animator said.

This seems obvious to me.

I’ve drawn birthday cards for kids where I first use gen AI to establish concepts based on the person’s interests and age.

I’ll get several takes quickly but my reproduction is still an original and appreciated work.

If the source of the idea cheapens the work I put into it with pencils and time, I’m not sure what to say.

> “If you’re a storyboard artist,” one studio executive said, “you’re out of business. That’s over. Because the director can say to AI, ‘Here’s the script. Storyboard this for me. Now change the angle and give me another storyboard.’ Within an hour, you’ve got 12 different versions of it.” He added, however, if that same artist became proficient at prompting generative-AI tools, “he’s got a big job.”

This sounds eerily similar to the messaging around SWE.

I do not see a way past this: one must rise past prompting and into orchestration.

  • jjulius 9 hours ago

    >... my reproduction is still an original...

    I don't know how accurate that is.

simonw 10 hours ago

The Lord of the Rings: The Return of the King back in 2003 used early AI VFX software MASSIVE to animate thousands of soldiers in battle: https://en.wikipedia.org/wiki/MASSIVE_(software) - I don't think that was controversial at the time.

According to that Wikipedia page MASSIVE was used for Avengers: Endgame, so it's had about a 20 year run at this point.

  • 101008 10 hours ago

    The problem is not AI per se (which is only a mix of algorithms). The problem is that this new wave of AI is trained on proprietary content, and the owners/creators didn't allow it in the first place.

    If this AI worked without training, no one would say anything.

    • brookst 9 hours ago

      > If this AI worked without training, no one would say anything.

      I don’t believe that for one second.

      People are rightfully scared of professional and economic disruption. "OMG training" is just a convenient bit of rhetoric to establish the moral high ground. If and when AIs appear that are entirely trained on public domain and synthetic data, there will be some other moral argument.

      • SoftTalker 9 hours ago

        Yeah I'm not interested in "art" created by a computer. A watercolor by a first-grader is more interesting.

        Same goes for music. If you need AI and autotune, find another way to earn a living.

        • brookst 4 hours ago

          So you need to know the provenance of art before you can decide if it’s interesting?

        • simonw 5 hours ago

          Do you think the Lord of the Rings movies were bad art?

      • NewsaHackO 5 hours ago

        Yea, it definitely is just a convenient argument for people who feel threatened. I find it hard to believe that the same internet that has so consistently disregarded copyright laws with such reckless abandon is now sincerely clutching its pearls about this.

        • brookst 4 hours ago

          Seriously. What percentage of these pearl-clutchers were mocking the MPAA and Lars Ulrich and supporting Napster as proof that art should be free?

          It’s a high percentage. But time, increased personal wealth, and “OMG this might affect me” all have a lot of power.

    • CuriouslyC 10 hours ago

      People would still be griping about how it devalues the hard work artists have put in, "isn't real art" and all the other things. The only difference is the public at large would be telling them to put a sock in it, rather than having some sympathy because of deceptive articles about how big tech is stealing from hardworking artists.

      • righthand 9 hours ago

        Yes, they're two different issues with AI:

        - LLMs were trained on copyright-protected content, devaluing the input a worker puts into creating original work

        - LLMs are a tool for generating statistical variations and refinements of work; this doesn't devalue the input but makes generating output easier

        Form-vs-function issues. So it would be preferable to give people a legal pathway to continue making money and owning their work, instead of allowing their work to be vacuumed up by people at corporations looking to automate them away. The functional issue still exists, but it doesn't put your personal work at risk of theft/abuse outside of its economic intent. Then the social stigma doesn't really matter, because "an LLM is just a tool" becomes a solid argument, not one causing abuse or deterioration of existing legal protections.

    • unstablediffusi 5 hours ago

      their consent was not required. https://en.wikipedia.org/wiki/Transformative_use

      petabytes of training data are transformed into mere gigabytes of model weights. no existing copyright laws are violated. until new laws declare that permission is required, this is a non-argument.

      >If this AI worked without training, no one would say anything.

      adobe firefly was trained on licensed content, and rest assured, the anti-AI zealots don't give it a pass.

      the copyright is just one of the many angles they use to decry the thing that threatens their jobs.

      • yallpendantools 2 hours ago

        There is no final word on the matter yet and there are counterpoints to the "Transformative use" argument.

        https://www.reuters.com/legal/litigation/judge-meta-case-wei...

        > "You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products," Chhabria told Meta's attorneys. "You are dramatically changing, you might even say obliterating, the market for that person's work, and you're saying that you don't even have to pay a license to that person."

        > "I just don't understand how that can be fair use," Chhabria said.

        https://ipwatchdog.com/2025/05/12/copyright-office-weighs-ai...

        > Stylistic imitation even without substantial similarity would likely be implicated under such a [market-dilution] theory, which could be considered as a market effect under factor four that diminishes the value of the original work used to train the model.

    • bobxmax 9 hours ago

      I don't know how they verify it, but the article claims the model mentioned ("Moonvalley") was trained on entirely clean/licensed data.

  • sho_hn 10 hours ago

    I'd say the comparison points at a misunderstanding of the current controversy, though I realize you are doing so deliberately to ask "Is it really that different if you think about it?"

    But I'll bite. MASSIVE is a crowd simulation solution; the assets that go into the sim are still artist-created. Even in 2003, people were already used to this sort of division of labor. What the new AI tools do is shift the boundary between what artists provide (input parameters and assets) and what the computer does, massively and as a big step change. It's the magnitude of the step change causing the upset.

    But there's also another reason that artists are upset, which I think is the one that most tech people don't really understand. Of course industrial-scale art does lean on priors (sample and texture banks, stock images, etc.), but by and large operations still have a sort of point of pride in re-doing things from scratch where possible for a given production rather than re-using existing elements, also because it's understood that the work has so many variables it will come out a little different and add unique flavor to the end product. Artists see generative AI as regurgitation machines, interrupting that ethic of "this was custom-made anew for this work".

    This is typically not an idea that software engineers share much. We are comfortable with, and even advised toward, re-using existing code as is. At most we consider "I rewrote this myself though I didn't need to" a valuable learning exercise, but not good professional practice (cf. the ridicule for NIH syndrome).

    This is one of the largest differences between the engineering method and the artist's method. If an artist says "we went out there and recorded all this foley again by hand ourselves for this movie", it's considered better art for it. If a programmer says "I rolled my own crypto for my password manager SaaS", they're showing incredibly poor judgement.

    It's a little like convincing someone that a lab-grown gemstone is identical to one dug up, even at the molecular level: yes, but the particular atoms, functionally identical or not, have a different history to them. To some that matters, and to artists the particulars of the act of creation matter a lot.

    I don't think the genie can be put back in the bottle and most likely we'll all just get used to things, but I think capturing this moment and what it did to communities and trades, purely as a form of historical record, is somehow valuable. I hope the future history books do the artists' lament justice, because there is certainly something happening to the human condition here.

    • simonw 9 hours ago

      I really like your comparison there between reused footage and reused code, where rolling your own password crypto is seen as a mistake.

      There's plenty of reuse culture in movies and entertainment too - the Wilhelm scream, sampling in music - but it's all very carefully licensed and the financial patterns for that are well understood.

    • bobxmax 9 hours ago

      This is just shifting the goal posts though. I remember people making similar arguments in the early days of Photoshop, digital camera (and what constitutes a "real" photographer), CGI, etc.

      I agree the magnitude of the step change is upsetting, though.

      • sho_hn 9 hours ago

        Right, I agree the sentiment isn't new, I'm mostly just trying to explain that way of thinking.

        But yeah, the tension between placing a value on doing things just in time vs. reducing the labor by using tools or assets has surely always been there in commercial art.

        • bobxmax 9 hours ago

          Agreed. I think it also doesn't help that the AI companies are saying "well they will just get new jobs"

  • mwkaufma 9 hours ago

    This is AI in the gamedev sense, not the present-hype sense.

est31 9 hours ago

Tech companies love to show off that they are using AI, how they are embracing it, etc. Among engineers, there is also a growing community of folks who embrace tools like Cursor, ChatGPT, Gemini, v0, etc.

When it comes to artists, I have less insight but what I see is that they are extremely critical of it and don't like it at all.

It's interesting to see that gap in reactions to AI between artists and tech companies.

  • taylorius 9 hours ago

    Tech people like it because it isn't good enough to completely replace them yet. The sophisticated, coherent architecture of a well designed system is (for now) still beyond the LLMs, so for tech people, it's still just a wonderful tool. But give it another year, and the worm will turn.

    • dingnuts 7 hours ago

      lumberjacks didn't go away when chainsaws were invented; demand for wood rose to meet the falling cost of wood and lumberjacks kept cutting down trees. don't see why it'll be any different for programmers.

  • ordinaryradical 9 hours ago

    I’m an artist and also work in tech. Enjoy using AI for work, no interest in using it for my art.

    Using AI for art is an idiotic proposition for me. If I was going to use AI to write my novel, I would literally be robbing myself of the pleasure of making it myself. If you don’t enjoy perfecting the sentence, maybe don’t be a writer?

    That’s why there’s a disconnect. I make art for personal fulfillment and the joy of the creative act. To offload that to something that helps me do it “faster” has exactly zero appeal.

    • dmarcos 8 hours ago

      AI will probably enable new workflows and forms of expression. "Old" ways will still likely be around in some form. Photography didn't kill portrait painting, and movies didn't kill theater.

    • username223 8 hours ago

      > If I was going to use AI to write my novel, I would literally be robbing myself of the pleasure of making it myself.

      The same would be true if I were going to use AI to read it. If we just wanted to trade CliffsNotes around, why bother with novels at all?

      Cyber-Leo-Tolstoy types a three-page summary of "War and Peace" into ChatGPT and tells it to generate an 800-page novel. Millions of TikTok-addled students ask ChatGPT to summarize the 800-page novel into three pages (or a five-paragraph essay). What is the point of any of this?

  • antithesizer 9 hours ago

    Or between artists in private and artists in public

  • mattl 9 hours ago

    And then there are the rest of us, indie developers who are building for and want to keep building things for the artists.

    We don’t want any of this and are working to build around it.

    It’s being really pushed by a lot of the same people who were pushing Web3 and NFTs and blockchain grifts.

jmugan 10 hours ago

I would imagine that if it is shameful among the established players to use AI, what will happen is that entirely new players will come in. For me, it's the story that matters, and if they can tell a better story with AI, then many people will naturally flock to them.

AndrewKemendo 10 hours ago

I would absolutely love being able to create the movies I've always wanted to be made and have them be plausibly good.

I wonder who is making the OSS version of these tools, so you can specify all the hundreds of parts needed to compose a decent framework?

  • bredren 9 hours ago

    The action I see is happening in ComfyUI workflows. That software is progressing rapidly and adapts to whatever SOTA models are available.

    Heavy emphasis is on making cutting edge models work with limited local compute.

    • bobxmax 9 hours ago

      Do you know any resources for getting started with ComfyUI? Last time I looked into these tools ages ago it was a complex mess

      • bredren 8 hours ago

        I suggest starting with their tutorial, with a decent LLM to help. The workflows can be represented as JSON, which you can export repeatedly and paste into a chat for feedback from the LLM based on your goal.

        For example, I wanted help setting up the use of a LoRA and batch iteration. The LLM can figure out where you've hooked things up incorrectly. The UI is funky, and the terms and blocks require a familiarity you won't have at the start.

        I think learning the basics of it this way would be useful because you'll get some positive feedback loop going before trying to make use of someone's shared, complex workflow.
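        Since ComfyUI's API-format export is just JSON (a dict of node ids mapping to a class_type and its inputs, where a linked input is a [source_node_id, output_index] pair), you can even script a basic sanity check before pasting it into a chat. A minimal sketch, assuming that export format (the node ids and the missing-node example here are made up for illustration):

        ```python
        import json

        def check_workflow(workflow: dict) -> list[str]:
            """Report inputs that reference node ids missing from the workflow.

            Assumes ComfyUI's API-format export: a dict mapping node ids to
            {"class_type": ..., "inputs": {...}}, where a linked input is a
            [source_node_id, output_index] pair. A two-element list that is
            not a link would be a false positive, so treat results as hints.
            """
            problems = []
            for node_id, node in workflow.items():
                for name, value in node.get("inputs", {}).items():
                    if isinstance(value, list) and len(value) == 2:
                        src, _ = value
                        if str(src) not in workflow:
                            problems.append(
                                f"node {node_id}: input '{name}' references missing node {src}"
                            )
            return problems

        # Tiny example: a KSampler whose "model" input points at a node
        # that doesn't exist in the workflow.
        wf = {
            "3": {"class_type": "KSampler",
                  "inputs": {"model": ["99", 0], "seed": 42}},
            "4": {"class_type": "CheckpointLoaderSimple",
                  "inputs": {"ckpt_name": "sd15.safetensors"}},
        }
        print(check_workflow(wf))
        # → ["node 3: input 'model' references missing node 99"]
        ```

        Anything a check like this catches mechanically saves a round trip; the LLM is still better at spotting hookups that are connected but semantically wrong.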

        • bobxmax 7 hours ago

          Thanks - I didn't think of using an LLM to debug the json, that's smart. I'll give it another shot.

  • bobxmax 9 hours ago

    Wasn't Stability working on open-source models? I wonder what happened to them; I remember some issues with their founder.

  • throw_m239339 9 hours ago

    IMHO, before the end of the decade you will absolutely be able to generate entire long-form movies just by writing a paragraph in a prompt. And people will not be able to tell the difference.

    Hollywood might save money in the short run, but they are doomed to irrelevance in the long run, because you'll have access to the exact same tools as they do.

    Is it good or bad? I don't know, it just is...

    • jplusequalt 9 hours ago

      >Is it good or bad?

      It's bad. Look at what social media and cellphones have done to society and human attention spans.

      There will be a lot of bad shit that will come out of this that won't truly be appreciated until it's already too late to reverse course.

Michelangelo11 10 hours ago

> It might cost only $10 million, but it would look closer to a $100 million movie. “We’re going to blow stuff up so it looks bigger and more cinematic,” he said.

No comment needed.

  • rideontime 10 hours ago

    Would love to hear him define "cinematic."

    • deadbabe 9 hours ago

      Feed it to an AI prompt if you want an answer, no point explaining.

mwkaufma 9 hours ago

> James Cameron teamed up with Stability AI, one of the tech companies making inroads in Hollywood.

... and produced the worst AI-upscales of True Lies and Aliens, to universal scorn from audiences.

ramoz 9 hours ago

My neighbor is a director and is secretly using and in love with Claude for much of their work.

nilirl 9 hours ago

So much 'he says, she says' that I honestly lost track of the point it was trying to make.

Takeaway: Maybe AI good. Maybe AI bad. Scary. But possibility. Everybody try.

throwaway743 9 hours ago

Hiding it? Are they supposed to slap on a disclaimer? Feels like one could safely assume they've been using it.

Anyways, AI-generated media is gonna lead to hyper-personalized, on-demand, generated media for people to consume. Sure, Hollywood will still be around, but once consumer computing power and the models catch up, there are gonna be a ton of people choosing their own worlds over the ones curated by an industry.

  • aerostable_slug 9 hours ago

    I think the really interesting part will be the illusion of choice presented by these systems. You'll think you've got the reins, but really it's a choose-your-own-adventure book that's effectively constraining you to the experience They want you to have.

    The only way out of this will be HN types who roll their own, and those will probably suck in comparison to the commercial systems filled with product placement and mindblowing amounts of information harvesting.

  • yahoozoo 3 hours ago

    They have to hide it because of the unions.

antithesizer 10 hours ago

When there is a cost to a business from you knowing about something, prepare to be lied to.

calvinmorrison 9 hours ago

We watched the live-action Lilo & Stitch reboot yesterday. One thing that struck me was that almost every shot was like 2 seconds or less. A lot of camera work for a kids' movie... or is that all they could manage to generate?

leptons 10 hours ago

It's a race to the (AI-slop) bottom. But most of the inhabitants of the world will barely notice.

I have to wonder if movies will improve or not with AI, because some really stupid franchises have made stupid amounts of money, while most people barely watch the actually good creative stuff. We're already swamped with unwatchable schlock, and I'm not sure it will improve if we automate it. It's the same people spending the money to make, promote, and distribute movies; the AI doesn't have the money or the impetus to make a movie. But if most people cared about art, creativity, and good storytelling, there probably wouldn't be a race to the bottom in the entertainment industry.

Idiocracy was a documentary, and "ASS" https://www.youtube.com/shorts/kJZjU2k5abs is what the AI will calculate we want to see, and it will win awards.

  • CuriouslyC 9 hours ago

    AI will make professional level movies cheaper and easier to make, which will make the medium more accessible. The AAA movies probably won't look any better, but indie stuff that previously would have been suggestive and bare due to budgetary constraints can now be more direct and lavish. In many cases that's going to be the difference between an indie project being a viable film and not.

    • leptons 9 hours ago

      >AI will make professional level movies cheaper and easier to make

      If you're talking about the kind of movies with big-budget explosions and violence, then no thanks. That isn't what I'm talking about at all. Sure, AI will make that schlock cheaper. A lot of the "indie" stuff is garbage, too.

  • jerrysievert 9 hours ago

    I use my Instant Pot to make risotto in 20 minutes with very little effort, and it's about 90% as good (in my very stubborn opinion) as making it the hard way. I very much appreciate an amazing risotto, but when I make it myself I'll usually choose the Instant Pot versus the extra work.

    I feel Hollywood might be the same way.

  • bobxmax 9 hours ago

    I've never understood this argument... most of Hollywood today is already slop lol