By Staff

New AI Tool "Nightshade" Fights Back Against Data Misuse in Training

A new tool named Nightshade has emerged as a means for artists to shield their creative output. The tool corrupts training data derived from art, potentially degrading the ability of AI platforms such as DALL-E, Stable Diffusion, and Midjourney to produce coherent visual content.

Nightshade subtly modifies the pixels in a digital artwork. If an AI platform trains on the altered artwork, the distortions act as a "poison," corrupting the associations the model learns. As a result, the AI may come to interpret an image of a car as something entirely different, such as a cow.
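To give a sense of what a "subtle pixel modification" means in practice, here is a toy sketch in Python. It is not Nightshade's actual method: Nightshade computes targeted, optimized perturbations, whereas this example simply adds random noise bounded by a small `epsilon` so the change stays imperceptible. The function name and parameters are illustrative assumptions, not part of the real tool.

```python
import numpy as np

def perturb_image(image: np.ndarray, epsilon: float = 4.0, seed: int = 0) -> np.ndarray:
    """Return a copy of an 8-bit RGB image with a small, bounded pixel perturbation.

    Toy illustration only: real poisoning tools like Nightshade optimize the
    perturbation to mislead a specific model, rather than adding random noise.
    """
    rng = np.random.default_rng(seed)
    # Noise drawn uniformly from [-epsilon, epsilon) for every channel of every pixel.
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Clip back into the valid 0-255 range before converting to 8-bit again.
    perturbed = np.clip(image.astype(np.float64) + noise, 0, 255)
    return perturbed.astype(np.uint8)

# A flat gray "image": every pixel of the perturbed copy stays within epsilon
# of the original, so the two look identical to a human viewer.
original = np.full((8, 8, 3), 128, dtype=np.uint8)
poisoned = perturb_image(original)
```

The key property, which the real tool shares, is that the edit is too small for a person to notice while still changing the numbers a training pipeline ingests.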

Ben Zhao, a professor at the University of Chicago and co-developer of Nightshade, told the MIT Technology Review that the tool's primary goal is to counterbalance AI firms that misuse copyrighted content for training. The research underlying Nightshade highlights how vulnerable text-to-image AI systems are to such disruptive strategies.

The research paper states, “A mere handful of Nightshade-induced alterations can sufficiently distort a text-to-image generative model, rendering it inept at creating relevant visuals.”

Artists who want to take this protective measure can upload their pieces to Glaze, another product from Nightshade's developers. Glaze alters how an artwork's style appears to AI models, making a realistic sketch read as cubist, for example. Integrating Nightshade with Glaze lets users choose to either "poison" their work or simply prevent AI from replicating their distinctive style.

The team behind Nightshade suggests the tool can serve as a strong deterrent for content producers against data scrapers that ignore opt-out protocols.

Amid a lack of clear regulations, the legal boundaries around AI-generated content and training data remain ambiguous, and numerous copyright disputes are still working their way through the courts. To deter data scrapers, current approaches mostly involve restricting their access. Adobe, for its part, is considering markers that would flag AI-created content and identify its rightful owner.

Some of the earliest lawsuits targeting generative AI platforms centered on copyright breaches tied to training data. Earlier this year, for instance, three artists took legal action against Stability AI (the maker of Stable Diffusion), Midjourney, and DeviantArt, alleging unauthorized use of their artworks. Getty Images, after suing Stability AI, launched its own AI-driven image generator trained on licensed visuals.

Tech giants Google and Microsoft have said they are willing to shoulder legal liability if customers face copyright claims while using their generative AI offerings, though most of those offerings are text-focused.
