I posted the other day that I’d like to see everything generated by AI (even in part) labeled appropriately.
There are multiple facets to consider. If I were living in fantasy land, I would wish for AI to not be. But there is no putting the genie back in the bottle, so that is a frivolous use of imagination.
So let’s work under the assumption that AI/Gen AI will continue to be popular. Let’s also set aside the environmental impact issue, which people turn a blind eye to. Let us focus on the more philosophical nuances.
I want to clarify that while I’m anti-gen-AI, it’s not as if I don’t see the value from a business perspective. Nor am I claiming that nothing generated by AI has value.
Obviously, as technology develops, things will be made that have novel and interesting uses.
I also see a myriad of uses for AI that would inarguably benefit humanity: safety systems, medical research and analysis, efficiency in rote, validated processes, warning systems, logistics, and many, many more.
And hopefully, those use cases are being explored. I know for a fact many are.
But what we see spread across the general public is the memification of the technology, which, while arguably benign as the act of an individual, carries terrible energy costs across a whole population, along with negative implications for the intellectual property that LLMs are built on.
I want to go one step deeper though. I want to speak about something that people have tried to brush off as the ramblings of a few anti-technology fearmongers. One connection recently sent me a message stating, “I agree with a lot of what you say, but would be afraid to come off as a non-adopter.”
To which I respond: I don’t think non-adoption is an option. AI permeates all levels of our society. Again, I think there are great use cases. I live in a grey area. If you’re the sort of AI bro who doesn’t have that kind of nuance, I don’t really want to work with you anyway.
I myself have experimented with data aggregation and summarization. In most cases the AI produced something that looked legitimate. But on closer inspection, it had synthesized content, treating the underlying data as a notional view of what the end product should look like. Several filters, several iterations, and several prompt adjustments later, the final product was still dubious.
I’m also in several groups that constantly push the boundaries of novel use cases with generative AI and LLMs. Chris Dubois hosts one such community (Dynamic Agency Community), and he is careful, nuanced, and transparent about what he uses AI for. He has foundational expertise that he builds on. I can’t say the same for many run-of-the-mill marketers.
Tack Insider is another community I’m a part of that shares relevant and nuanced information daily, including new and novel uses for AI.
While I might have my qualms with some of the uses presented, a lot of these people are showing careful critical thinking and not simply showcasing vanity shortcuts.
But I’ve got real concerns about the plethora of indicators that thinking is being outsourced, in a bad way. And I’m just baffled and exhausted by the amount of fluffy, skin-deep information passed off as “expert,” the laughably mass-produced “custom” outbound communication plays, and the memification (which wastes so many resources just so you can see yourself as an “action figure” or whatever).
We can see the erosion of human critical thinking happening in front of us. This is made doubly evident by the lack of quality control for the massive amount of information that is produced. In a race to produce more, we’re starting to slip away from validation. This is problematic in every field.
As an example: we’ve seen the government produce and publish a report citing studies that never existed.
And while it would be easier to dismiss this as the result of poor validation and oversight of a few, it appears to be a larger issue.
I say this anecdotally, as I have never known a period when more people have announced themselves as “experts” seemingly out of thin air. And to accompany their expertise: copious new content powered by generative AI.
And some of the content is fine. It becomes apparent which people are actually expert in their topic area simply by asking one or two specific questions that delve deeper than their fluffy, skin-deep content.
But I have a problem with it. A problem that can only be solved with a label (I think).
Now. Many people have pooh-poohed my complaints.
“It’s just a tool.”
One colleague, a person whose content I like and whose expertise is legitimate, used the example of Pro Tools in music. And I admit, I like what modern DAWs (music recording software) do for musicians in terms of ease of use. We’re actually planning to talk later this week about the topic generally, and I look forward to it.
There is a line though. For instance, I would never ask my software to generate music to accompany me.
Additionally, when digital photography first arrived on the scene, many photographers were puritanical, insisting digital couldn’t pass muster. “Film or nothing!” they said.
So the question becomes, where’s the line?
My answer is full of grey areas.
Because the premise is that I want people to be the progenitors of their own stuff. For instance, if you write a really good paragraph describing what you want, provide the underlying data, edit the draft of the article that the AI wrote from that info, and then publish it to the world…I still think you should disclose that you didn’t actually write the thing, because you didn’t.
You may have provided some of the base-level material, the idea. However, your detailed two-hundred-word prompt is still not the one-thousand-plus-word final product.
Further than that, if you prompt a book, even with detailed outlines, character descriptions, setting details, and scene requirements, and it writes the book, then IT wrote the book. YOU didn’t.
I’ve ghostwritten for leaders before. So you might ask, “Matt, do you feel the same way about people who hire ghostwriters and claim authorship?” …and, hell yes, I do, and it’s not something that I’ve ever enforced, because I need to make a buck, BUT it is vanity on the part of my customer.
I don’t care what you say. If you don’t actually write it, then claiming full credit for it IS vanity. We can argue about what we should call it all day long.
But if your digital proxy made something, even something indistinguishable from your voice, and you say “by [insert your name],” it is a lie, because you did not, in fact, produce it.
It is potentially easier to draw a hard line in the visual arts space. You’re never an artist for having AI produce an image from your prompt. Sure, you’re using your imagination…I’ll agree to that. But you’re not really producing anything.
There are a few creators I really like, smart, capable guys who write some really great stuff, who have outsourced a lot of their visuals to AI, and it bums me out. I’d rather they stick with low-budget alternatives and props, producing what is authentically them, than what I see now. I’m sure they’ll continue to be successful, but it’s a sticky area for me.
I get really prickly and protective about art and the value of art. It’s a wall that I hit when discussing this topic with pure businesspeople.
Anywho. I actually don’t think people who opt to use generative AI should have an issue disclosing their use…unless…
They’re scared that doing so would undermine their credibility…
Which shouldn’t be the case if they are truly experts in the field they claim to be.
If you tell AI to make a slow-tempo, minor-key sonata with three major movements in the style of Mozart’s earlier work, but with modern elements of jazz thrown in…it may actually produce something that sounds like that prompt. It may sound alright. It may be a passable piece of music.
But two things.
A person didn’t really make it, so I already hate it.
If you try to say that you’re a concert pianist after you share it…you’re not, and I hate you now.
So I want a label. You’re going to make all the bullshit anyway, no matter how much I complain; the least you can do is be honest about the amount of involvement you had in the process.
We can argue about where your thing starts and the AI’s ends. But tools facilitate, and AI generates in this context. I love facilitation. I love efficiency.
But at some point, if you hand the work off and end up with a bunch of stuff you didn’t build, that’s no longer simply a tool.
I will PS this by adding the science-fiction note: if AI becomes self-aware in a meaningful, provable way, sentient, then my feelings on the subject can change, because then we are talking about an entity authoring things, and I will judge that entity and its works accordingly.
