Chances are you’ve heard of ChatGPT, the revolutionary OpenAI system programmed to respond to prompts and questions in a conversational way. In other words, it’s a chatbot that engages in back-and-forth dialogue with users in natural language. The technology, offered by openai.com, is free to use, and it’s been gaining traction and notoriety for its ability to write code, articles, stories, and more, sometimes weirdly, but often with surprising and positive results. Educators fear it because students are using it as a Cheatbot to write full reports, research papers, and other assignments. Alarmists are ringing bells about the possibility of sentient or semi-sentient AI running the world à la Skynet.
While most of us can probably agree the technology presents uncertainties, Skynet hasn’t materialized yet, and ChatGPT offers plenty of practical applications. Let’s explore some of the main benefits and drawbacks of this powerful and accessible tool.
Let’s pretend you need to write a generic, overarching series of social media posts to market your business, but you don’t have the time to dream them up or the money for a professional social media copywriter. ChatGPT can write these for you. How about a report on the sustainability of starting an apple orchard? ChatGPT can write this for you. What about a college research paper on the Battle of Little Bighorn? No problem! ChatGPT can write this for you, too... in minutes. There are ethics to consider, namely, “Should I use ChatGPT to write my history paper on the Battle of Little Bighorn?” Of course you shouldn’t. But if you do, be aware that the bot is only as good as the data it can access. In the case of ChatGPT, its data is accurate only through 2021, meaning it’s not the best tool for reporting on anything new, and it doesn’t stay current with constantly changing metrics and news.
The bot pulls from information that was available across the web through 2021, but OpenAI cannot verify that the data available to ChatGPT is accurate. It currently does not distinguish between actual news and “fake” news. If you ask ChatGPT to run a report on the migration of monarch butterflies, it will instantly provide information from sources that might include blogs, Wikipedia articles, hobbyist websites, National Geographic, and more. In academia, the term “peer-reviewed” describes information that has been professionally vetted by experts. Anything not peer-reviewed cannot be fully verified, and this includes hobbyist websites, blogs, and other readily available web media. This means ChatGPT is capable of being both a “fake news” contributor and a “fake news” generator, depending on the scope and intent of the people using it to churn out content. Think of the negative ramifications this could have in a contested election cycle or within a country that restricts access to information, such as China. When questioned about its accuracy or its very existence, ChatGPT will defend its sources vigorously. It even begins to sound frustrated and angry.
Setbacks notwithstanding, ChatGPT can be a valuable tool. Marketers can use it to generate instant content, but at this stage of its development, the bot can only compile generic content or specific content that is accurate only through 2021. A marketer who wants to reach audiences in constantly changing and highly specific demographics can use it effectively only to generate a content baseline. The human marketer still needs to set the metrics, such as which demographic the marketing should target, and must add verified, researched, up-to-date details to ensure the information provided is accurate. For example, if you are marketing a financial service around stock trades, you can’t rely on 2021 data. You need today’s data. Going back to that report on the Battle of Little Bighorn, you could probably assume most of the data provided by ChatGPT is accurate, but you can’t be completely sure, and if some key piece of evidence advancing that history was discovered after 2021, you would need to add the nugget yourself. If you needed to write an opinion on how the outcome might have been different, you’d have to write that yourself. A bot can only work from the facts available to it. It cannot truly form opinions about “what ifs” without solid information.
As for my opinion on the fear of ChatGPT, particularly among educators, I believe this type of AI is here to stay and will only get more robust. Will it change the way we work? Yes. Will it directly impact a lot of jobs, including possibly mine (marketing)? Yes. Will it change business workflows? Yes. Will it kick off an AI arms race? You bet. It’s here and it’s not going away. Rather than fear it, we need to understand it and – though I hate saying this – embrace it as a tool people will use, including students. Students, for their part, need to forget about using it as a cheat. For one thing, high schools, colleges, and universities have tools that run papers through verification systems, and they can tell whether students’ content is genuine or copied. For another, no bot, no matter how good, can replace a human being’s critical thinking skills. In all walks of life, use it as a thought-starter. Ask it specific questions to aid in studies, research, and business development. Ask it for a joke if you’re feeling down or need a distraction. Ask it to write a poem about whatever. Ask it for ideas on a social media post, script, blog article, journal entry, case study, or marketing plan. Then investigate its data to confirm facts and dig in to add the relevant details necessary for accurate marketing reach.
Just remember, if you ask it to play “Global Thermonuclear War,” please make sure it’s not connected to NORAD. And if Arnold Schwarzenegger suddenly shows up in your backyard to protect you, just go with it.
Libby is a digital specialist with more than 20 years of experience researching and developing marketing plans and content for emerging media.