03 - CARBON AND CODE - When It Goes Wrong
Misuse, missteps, and what they teach us about trust.
AI doesn’t make ethical decisions.
It doesn’t choose whether to disclose, whether to imitate, or whether to ask permission.
We do.
And when we get lazy, careless, or opportunistic with that power—things break.
Audience trust. Artistic integrity. Legal frameworks. Cultural goodwill.
This entry is about those breaks. Not to name and shame, but to learn. Because the warning signs are already here, and they look a lot like opportunity—until they don’t.
⸻
Case 1: The AI-Generated Children’s Book That Sparked an Industry Debate
In late 2022, a tech founder used ChatGPT and Midjourney to create Alice and Sparkle, a children’s book about a girl who builds a robot. The entire project—from story concept to illustrations—was completed in a matter of days.
It quickly went viral. Then it blew up, for all the wrong reasons.
Writers and artists criticized the project for:
Using AI-generated content built on uncredited source material
Passing off the work without context or creative disclosure
Skipping the editorial, developmental, and review processes that ensure children’s literature is thoughtful and appropriate
The creator defended the work as “experimental.”
The backlash said otherwise.
Lesson: Speed doesn’t replace stewardship. “Just trying something” isn’t a free pass when your output reaches a public audience.
⸻
Case 2: AI Fashion Models and the Vanishing Workforce
In 2023, H&M was called out for showcasing AI-generated fashion models in its online catalog. These weren’t stylized avatars. They were designed to look like real people, and no disclaimer appeared until critics flagged the issue.
The response was swift:
Fans accused the brand of erasing representation by creating artificial models with curated skin tones and body types.
Photographers and models pushed back against the growing trend of replacing creative labor with digital composites.
H&M eventually acknowledged the models were synthetic and claimed the intent was to promote “digital sustainability.” But by then, the damage was done.
Lesson: When you replace real people with pixels—without disclosure—you’re not just innovating. You’re deceiving.
⸻
Case 3: The Quiet Ghostwriter
In the content marketing space, it’s become common to use AI to draft LinkedIn posts, newsletters, and even thought leadership articles, often published under an executive’s name.
Some of these pieces go viral.
Few mention the use of AI.
This becomes especially problematic when:
The post makes bold claims or shares personal reflections the AI never actually experienced
The writing tone shifts dramatically, undermining the brand voice
The audience realizes the “thought leadership” came from a prompt, not a point of view
Lesson: AI can help you write. But if it starts replacing your voice, it’s no longer support—it’s substitution.
⸻
Fumbled the Ball — A Red Flag Checklist
You might be crossing a line if:
You use AI to generate the entire product without curating, editing, or refining it.
You pass off AI-written text as deeply personal or experiential.
You monetize or publish AI content without attribution or process transparency.
You use AI-generated likenesses, styles, or voices without permission or consent.
If you’re unsure whether it’s okay—ask yourself how you’d feel discovering it from the other side.
⸻
It’s Not the Tool. It’s the Silence.
Most of the damage AI causes in creative work doesn’t come from the output itself.
It comes from what isn’t said:
Missing Disclosure
This is the most straightforward: AI was used significantly in the creation of the work, but the creator didn’t tell the audience.
Examples:
A thought leadership post written 80% by AI but shared under a personal brand with no context.
A design portfolio that uses Midjourney imagery but presents it as hand-drawn concept art.
Why It’s Problematic: The audience assumes human authorship. When they discover the truth, it can feel deceptive—even if the content itself is solid. This breaks the implicit trust between creator and consumer.
⸻
Invisible Assist
This is more nuanced: AI was used meaningfully behind the scenes, but the creator implies it was fully human-led.
Examples:
A “writer’s” personal essay significantly refined by AI to improve tone and narrative flow, with no acknowledgment.
A music producer who lets AI generate vocal harmonies, then presents them as a manual arrangement.
Why It’s Problematic: While not as overt as full ghostwriting, this creates a false impression of skill or effort. It’s not necessarily unethical in all contexts, but it becomes dicey when used to position oneself as an expert, artist, or thought leader.
⸻
Implying Human Effort Where There Was None
This is the most deceptive of the three: AI did most of the heavy lifting, but the creator presents the work as deeply personal or handcrafted.
Examples:
An AI-generated memoir or blog post about a personal hardship the author never experienced.
A completely AI-rendered illustration submitted to a traditional art gallery with no mention of its origin.
Why It’s Problematic: It not only misrepresents authorship; it can exploit emotional authenticity or mislead gatekeepers who evaluate work against human-effort standards. This is where ethical lines, and even professional integrity, are crossed outright.
AI doesn’t destroy trust.
People do—when they choose convenience over clarity.
⸻
What Makes the Difference?
Accountability doesn’t kill creativity.
It anchors it.
And in each of these cases, a little extra honesty would have gone a long way.
In the next part of Carbon and Code, I’ll turn the lens on myself. Where do I use AI in my process? Where don’t I? And how do I walk the line without stepping over it?
Because if we’re going to expect better from others, we should start by being honest with ourselves.
⸻
Call to Action:
Have you seen a public example of AI misuse—or caught yourself crossing a line?
Let’s learn from it. Share your thoughts or links in the comments using #CarbonAndCode.