
How to Take the Ick Out of Marketing
May 29, 2024
As a content strategist, copywriter, and marketer, keeping up with the latest tools is a part of what I do so that I can serve my clients better and more efficiently.
But generative AI is not one of those tools.
And that’s because I still get better results. I’m faster, I’m more accurate, I take less energy, and I’m ethical.
Speaking of ethics, WIRED columnist Reece Rogers addressed a couple of questions on generative AI ethics, and I agree that there is currently no ethical generative AI app. You can read that article to get into the details of the why, but to sum it up, these apps require inordinate amounts of data to get a somewhat-OK-but-not-really result.
So let’s get into all the reasons why I personally will not use generative AI in my content.
Generative AI hurts the environment.
All that data requires a lot of computing power, and that computing power requires large data centers, and those data centers require a whole lot of cooling, and all that cooling requires a whole lot of water, a limited resource. Here in Washington State, many data centers are being built in Central Washington.
Along with the increased demand for water, there’s an increased demand for electricity, which is already starting to strain how we receive electricity, meaning the flow of electricity won’t be reliable, resulting in blackouts and brownouts.
I can tell you that I already experience blackouts and brownouts here in Seattle, more than I did in Orlando, which had (maybe still has?) a fragile electrical grid. I was very lucky that I didn’t lose power when we had a ‘bomb cyclone’ here in November 2024, but that windstorm showed that Western Washington’s electrical grid is also fragile.
So those are the environmental impacts. Now for the ethical impacts.
Using Generative AI is inherently unethical.
Generative AI is theft. Developers scrape data from everywhere, with no attribution or compensation. Even if data is licensed, these LLMs (large language models) need way more data to create their outputs.
Think of the proprietary or confidential information that is on your computer or your phone right now. Now think about putting that data into bots like ChatGPT, Google’s Gemini, or Microsoft’s Copilot. If I actively used bots with my data — my ideas, my resume, my IP — it’s murky whether that gets uploaded into their LLMs. And these bots often end up on devices without consent or an easy way to opt in or out, where they can still be siphoning up data.
OpenAI CEO Sam Altman admitted that it would be “impossible” to have its popular chatbot, ChatGPT, run without using copyrighted data for free. Meta CEO Mark Zuckerberg callously said that creators “overestimate the value” of their content to justify Meta’s data scraping. Of course, they’re using this ‘free’ data and content to create products for sale and profit.
Generative AI promotes greed.
And let’s talk about sale and profit. Businesses are pivoting to generative AI to get rid of jobs. It’s already happening in the gaming industry (I will get back to gaming in a minute) — and this month, Salesforce is cutting 1,000 jobs while adding AI salespeople.
Those are just a few stories of job loss due to AI. It’s happening in the US federal government right now.
So what are these generative AI users getting for all this wasted energy, these security issues, and these culled jobs?
Not much. AI slop is our new reality now.
Generative AI creates sloppy art.
There’s so much frankly ugly, uncanny art on social media now (and there’s a reason why it looks like that). And this slop is basically other people’s art and music thrown into a blender and churned out as subpar content. Thankfully, there are tools artists can use to fight back, but this is going to be a long, dirty fight with a lot of time tied up in legal battles.
So about gaming…there’s a narrative gaming app that I have used for years. It had great storytelling and characters that will live in my heart for the rest of my life.
Then the studio got bought out by a company that specializes in generative AI.
My fellow gamers and I are not happy about it. The art quality is poor — very flat and unsophisticated. The sprites of characters look deranged. We keep asking the company to stop making generative AI content, but they don’t want to. And I told them as much in a comment: we’re at an impasse.
This may be the last month I subscribe to their premium service, because I will not pay for AI slop.
So there’s the slop, and there’s the hallucinations.
Generative AI makes stuff up.
These bots are people-pleasing pattern matchers that will make stuff up, even with all that data at their greedy disposal.
Yes, Google search is broken, but it can still do math. Try it.
If you’re using bots like ChatGPT for topics you’re researching but unfamiliar with, then who is going to fact-check you or the bot if you don’t have an editor or a fact checker?
Imagine a student researching a term paper and using bots like ChatGPT not only for research, but for writing. First, we’re already hearing from professors and teachers how they can tell when their students use it in their assignments (which is most of them at this point).
Generative AI saps our cognitive power.
We’re also starting to see the effects of what is called cognitive offloading, where we cede our brain power to computers.
We do this already with our smartphones — real quick: do you know your mom’s or favorite loved one’s phone number by heart? (Don’t worry, I won’t tell if you can’t remember the number. I can barely remember my mom’s phone number…but I remember my childhood phone number!)
But young minds need to learn how to think critically, so they can tell real from fake. Add the fact that generative AI makes up sources and quotes and churns out derivative art and music, and younger people will be increasingly unable to tell reality from fiction.
Why I am a better choice than Generative AI.
So I said at the top that I get better results than a generative AI app. Let me list it out.
- I’m faster. You may be thinking: Deborah, I can put in a prompt and come up with a whole business plan/essay/market research/you name it. How can you be faster? Because I don’t make stuff up or ‘hallucinate’ my content.
- And here’s the kicker: using a bot means I have hired an unverified, unreliable writer that I will now have to edit and fact-check. And this is even more editing and fact-checking than I’d have to do with a human. With a human, I can give a content/copy brief, give sources, and talk about tone, voice, and audience. ChatGPT has its own voice and tone, preferred word choices, etc. If I just write it myself, I save time.
- I’m more accurate. It’s what I just brought up. Being inaccurate wastes time. When I work with my clients, I rely on their own previously created content, their preferred primary sources, and their own expertise. In my niches, I’m somewhat of a subject matter expert, so that’s an extra layer of accuracy. Before I was a content strategist, I worked in mental health research for 7 years. So, along with my own educational background (a BA and an MFA), my work helping clinical researchers look up articles in the library, and my time in mental health research — I’m a much more trustworthy writer than ChatGPT, because I can show you my sources.
- I take less energy. I should drink more water myself, but the research I did for this article did not take a case of bottled water. Every time I look something up on Google, it doesn’t take a bottle of water like a ChatGPT query does. Most of my light bulbs are energy efficient, too. My laptop is smart charging right now.
- I’m ethical. Thanks to my undergraduate education, I became very adept at using and citing primary sources. Even with that in mind, you still have to consider the source. Not every primary source is accurate or reliable. My clients rely on me for being accurate and ethical. Putting out disreputable or inaccurate information can harm their reputation, their relationships, and their clients.
What are we going to do with all this future?
Generative AI doesn’t seem to be solving problems that we are unable to solve without it. From what I can see, all these half-baked, not-ready-for-market products are doing for their creators is creating an environment for a good ole fashioned money grab.
What makes us different and better than these bots is that we’re creative. We created these bots. And we create the content that these bots feed on. We shape the LLMs.
We’re trying to make these machines in our image…but we’re failing.
Because they can’t think. They have no consciousness. There is no ghost in the machine. There are only echoes of us, especially our biases and preferences.
This tweet by sci-fi and fantasy author Joanna Maciejewska sums up how I and many others feel (96,000 likes at the time of this writing).
You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.
Not everything about the human experience is about infinitely scaling commerce. There are limits. And generative AI has its limits, clearly.
Maybe it’ll get better, but why should it?
Generative AI can’t buy you taste or talent.
Today, as I’ve been writing this blog post, I’ve been watching a lot of music videos from the 1980s as a writing companion. During that time, technological innovations like drum machines and synthesizers weren’t initially embraced. People were afraid they would make music soulless.
But now that we’re living in the 2020s, these tools are integral to most popular music. Even newer music creation and editing tools like Pro Tools and GarageBand are now commonly used.
Talented musicians like Prince, Eddie Van Halen, and Todd Rundgren pioneered ways of using synthesizers and drum machines that we still use today.
I bring those up because synthesizers didn’t put orchestras or other live instrumental talent out of business. We still have plenty of live music.
And people who played synthesizers and used drum machines still had to be musical. They still had to have talent to use these tools.
What generative AI wants to disrupt is the sweat equity it takes to become talented, and that’s an impossible task.
The creative arts have been devalued for decades, and unfortunately, a lot of us have to decide between perfecting our crafts and supporting ourselves and our families. So we don’t have the time we normally would to get better at drawing, illustrating, singing, playing an instrument, or writing.
It goes back to that tweet by Joanna. I’d like to have more time to do the creative things that take a lot of time and energy.
We should have technology that supports the good that we’re doing, not technology that hurts our planet, steals from our creative efforts, or eliminates necessary jobs. The powers that be want to further cheapen our human existence with ‘tools’ like generative AI so that we’re available for more mind-numbing work that these bots can’t do yet.
With all the brain power on this planet, you’d think we’d want to use our brains for other things, like making the world a better place. And some of us are doing that, with technological innovations that don’t displace people or gobble up their livelihoods.
Am I a Luddite? Yep. But I’m not a technophobe.
This conversation reminds me of the term ‘Luddite’ and how it’s associated with being a technophobe.
Luddites were weavers and textile workers in the early 1800s who rallied around Ned Ludd, a possibly apocryphal weaver who became the de facto mascot of their movement. Luddites were concerned that new automated machinery would reduce textile quality and threaten their wages and livelihoods. In protest and resistance, they would break these machines and conduct raids.
The UK government eventually brutally shut down the movement, but part of what remains is that spirit of resistance, which today includes people committed to resisting generative AI.
And other types of technology, too. But I’m listening to music with Bluetooth headphones and typing this on a laptop, next to a fairly new smartphone that could use some charging, with a smartwatch on my wrist.
So although I am a Luddite, I’m not a technophobe. I love technology when it ethically supports what I do. Luddites didn’t want machines replacing them as workers or putting out shoddier work than they could. That’s what being a Luddite is about.
Ironically, generative AI and a lot of the stuff coming out of Silicon Valley isn’t making our lives better or furthering us as a species. I keep seeing ‘innovations’ that boil down to: oh, you’ve just reinvented the library? Or public transportation?
It comes back to just making money, lots and lots of money. And empowering very few people.
Conclusion
I want my content and work to be honest, authentic, and true. I think that’s the best way to help my clients and help the world.
Generative AI brings up many other ethical issues, such as impersonation, personal and corporate liabilities, and political manipulations.
And we’re just not ready to handle it. We can barely handle it when someone replies all to an email.
Can we handle a bot that is giving us half-truths and lies based on stolen content and data?
We can’t. And I definitely can’t.
I do wonder if generative AI will ever lap me or best me. I think we’re headed for a bubble bursting soon instead.
Until then, I’m going to hope and pray for Rosey the Robot to come online.
Let’s work together! Book a consultation if you have a burning content question or reach out so we can learn more about each other.