
AI Disclosure - Is It Practical? Do We Need It?

Do we need to know whether these were AI-generated or real photos?

Should We Always Know When AI Is Involved?

Imagine scrolling through your feed and not knowing whether that stunning artwork was created by a human or an AI. Does that matter to you? AI tools are transforming creative industries, generating everything from photorealistic images to compelling marketing copy. But this progress comes with a dilemma: should we always know when AI is involved? I have certainly generated a lot of content with AI on this platform myself, and most of my images are Midjourney-generated.

The header image above is from a more technical platform called Civitai, which generates some incredibly realistic images.

Why Tagging AI Matters

Platforms like Instagram are experimenting with tags for AI-generated visuals. This is about trust and accountability. Consider Catherine, a university student who uses AI to help with an essay. Should she disclose this to her professor? Why or why not? Transparency helps distinguish between human effort and machine efficiency, ensuring AI doesn't undermine originality or ethical standards.

Think about John, a market researcher. He relies on AI to analyze visual trends. How does knowing the source of these visuals impact his interpretations? Tagging AI-generated content isn't just about drawing a line between human and machine—it's about redefining how we perceive creativity itself.

The "AI Signature" and Beyond

The need for an "AI signature" extends beyond images. Emails, articles, even code can be AI-influenced. Should there be a watermark on AI-generated text? Should AI-written code be labeled? How much disclosure is too much? If AI tools are merely assistants, should their involvement even be highlighted?

Perhaps platforms need nuanced disclosure policies:

  • Context is Key: AI disclosures in journalism might require stricter transparency than in creative art forms.
  • Co-creators: AI could be acknowledged as a collaborator, fostering a perception of human-AI partnership.
  • Business: In business and education, we are already reading, using, and producing content that is AI-assisted or AI-generated. Is it practical to track which AI tools were used, and how much of the work was human?

"I'm Not a Robot" – A Question of Identity

The short film "I'm Not a Robot" offers a thought-provoking look at this issue. A woman fails CAPTCHA tests repeatedly, leading her to question her own humanity. The film highlights the blurring line between human and machine and how we validate identity in a world where AI mimics human behavior. This struggle mirrors our own anxieties about AI's growing capabilities and the potential for it to erode what makes us uniquely human.

Should AI Aim to Blend In?

Maybe the goal is seamless integration. If AI performs tasks indistinguishably from humans, it becomes a collaborator, not a competitor. This raises a provocative question: Should the focus shift from disclosure to impact? If AI-generated visuals or articles serve their purpose, does their origin truly matter?

Finding the Balance: Transparency and Integration

The future of AI hinges on balancing transparency and integration. Tagging AI promotes trust, but it shouldn't overshadow AI's value. We should embrace AI as a partner, shaping new forms of creativity.

Challenge: Next time you see a digital artwork, ask yourself: Would my appreciation change if I knew it was created by AI?

As platforms like Instagram introduce AI tags, and films like "I'm Not a Robot" spark debate, how we perceive AI's role in creation will evolve. What new forms of "AI signatures" might emerge? How will we navigate the ethical complexities of AI co-creation?

This article was curated in ChatGPT, and I used my Google Gemini Gem, called the Imbila Copy Writer, to review and improve it. As always, I read all the content, and I edit and publish it myself.