ChatGPT, Bard, and the New Wave of AI: What it Means for the Web

The path to computer intelligence hasn't proven quite so direct. On the surface, brains and computers seem very similar, but it's clear that computers aren't thinking yet.

2/21/2023

Categorized

  • Content and IA
  • Strategy

When I was in the 7th grade, I got my first computer — a Radio Shack TRS-80. (This is dating me, since none of those nouns mean anything today.) This computer used your TV as a monitor, and allowed you to code in the BASIC programming language. It was probably less powerful than the computer chip that now lives inside your USB cable. (Yes, there's a little computer in there.)

Despite the paltry power, I was fascinated by the potential. I was able to tell the computer how to "think" with a few basic commands. Fueled by '80s sci-fi, I imagined a future where computers could really think like people. I figured if you could somehow get a computer big enough to read all of the world's knowledge, some form of sentient intelligence would emerge, fight Superman, and start hunting for Sarah Connor.

How we got to ChatGPT.

The path to computer intelligence hasn't proven quite so direct. On the surface, brains and computers seem very similar, but the devil is in the details — something you know if you've ever tried to get Siri or Google Assistant to understand a command outside of their lane. It's clear that computers aren't thinking yet.

This year, though, ChatGPT and similar systems feel like they've broken through a barrier. With a vast store of knowledge and a conversational approach that seems to understand things on a new level, we've gotten closer to those predictions from the '80s — we've fed in the world's knowledge, and now an intelligence is emerging.

But is that what's really going on? 

ChatGPT comes from a branch of computer science research called Natural Language Processing (NLP). NLP's focus is on teaching computers to parse and understand human language. The type of NLP that ChatGPT uses is called a Large Language Model (LLM). Basically, LLMs work by feeding the computer a lot of text and analyzing how words tend to flow together.

A really primitive version of this lives within your phone keyboard's autocomplete. When you type "good" in a text message, your phone might suggest "night," "morning," or "luck" as the next word. It recommends these because they often come after the word "good" in natural conversation. If we analyzed a million documents, these words would far more commonly follow "good" than something like "pineapple" or "from."
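As a rough sketch (not any actual keyboard's implementation, and using a tiny made-up corpus), that frequency-based suggestion can be modeled in a few lines of Python:

```python
# A toy version of keyboard autocomplete: count which words follow "good"
# in a small corpus, then suggest the most frequent ones.
from collections import Counter

corpus = ("good morning everyone good night and good luck "
          "have a good night good morning to you")

words = corpus.split()
followers = Counter(nxt for prev, nxt in zip(words, words[1:]) if prev == "good")

# The most common followers of "good" become the suggestions.
print(followers.most_common(3))  # [('morning', 2), ('night', 2), ('luck', 1)]
```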

So is ChatGPT thinking? 

Tools that employ LLM technology are able to predict and suggest words, ideas, and even larger concepts from a basic prompt. Does this mean that tools like ChatGPT are actually "thinking?"

Well, no. Return to your phone's autocomplete: if you keep selecting the recommended word, you'll quickly generate some pretty crazy sentences — sentences that make little sense, even if they're technically valid English.
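You can see why with the toy model from above. If you greedily chain the single most likely next word over and over, the output falls into repetitive loops (again, a hypothetical sketch with made-up data):

```python
# Extend the toy model: build next-word counts for every word, then always
# take the single most likely follower. Greedy chains like this quickly fall
# into loops: locally plausible, but nonsense overall.
from collections import Counter, defaultdict

words = ("good morning everyone good night and good luck "
         "have a good night good morning to you").split()

model = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    model[prev][nxt] += 1

word, sentence = "good", ["good"]
for _ in range(8):
    if not model[word]:   # dead end: no recorded followers
        break
    word = model[word].most_common(1)[0][0]
    sentence.append(word)

print(" ".join(sentence))  # "good morning everyone good morning everyone ..."
```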

Recommended autocomplete on an iPhone, showing how models mimic speech by looking for common patterns in text.

Rather than "thinking" on their own, these models are often described as a kind of technology-bound parrot. Parrots can mimic and repeat human speech, and often may seem like they're listening, when in reality they're simply repeating sounds based on the patterns around them, stringing sounds together without any notion of their meaning.

ChatGPT is more advanced than this, but at its core it works the same way: it's essentially a better-trained parrot. It uses a much larger model, and it's far better at picking out the most important concepts and parts of speech within a body of text.

And it "learns" — or, at least, it adjusts its responses and begins to devalue incorrect or troublesome answers based on measurements of the success of previous sessions — often conducted by professional users who rate ChatGPT's output on whether the answer was helpful, sounded good, or was toxic. (Past LLMs have often had a tendency to lean towards vulgar or racist answers — thanks Internet!)

Where do the answers come from?

With an understanding of how LLM-based tools provide answers, the next question might be obvious: where do those answers come from?

Data for these answers is pulled from a "training set," built by extracting a lot of information and content from the internet, like a search crawler. When you ask a question or provide a prompt, the system is essentially looking at its training set for text that has similar concepts to the ones in the prompt, and then uses its language rules to smash bits of that text into a coherent answer.
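To be precise, an LLM doesn't search stored documents when it answers; the training text is baked into the model's weights during training. But as a loose mental model, "find the training text that shares the most concepts with the prompt" can be pictured as a simple word-overlap match (all data here is made up):

```python
# A loose mental model only: match a prompt against "training" text by
# counting shared words, then answer from the best match.
training_set = [
    "parrots can mimic human speech by repeating sounds",
    "large language models predict the next word in a sequence",
    "the TRS-80 ran the BASIC programming language",
]

prompt = "how do language models predict words"

def overlap(a: str, b: str) -> int:
    # Count the words the two texts have in common.
    return len(set(a.split()) & set(b.split()))

best = max(training_set, key=lambda text: overlap(prompt, text))
print(best)  # the training text sharing the most words with the prompt
```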

One major concern is that a coherent answer does not equal a correct answer — answers might combine facts in completely incorrect ways, or be wrong in subtler ways that are harder to catch.

For content creators, there's a more significant concern: how their own work is being used. Much of it is already feeding these training sets. What's more, the crawlers that gather content for training are often very similar to the crawlers that search engines use to index content — which means creators can't do much to block them without potentially harming their own search reach.
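There is one partial lever. Common Crawl's bot (CCBot), whose crawl data has been used to train large language models, honors robots.txt, so a site can ask it to stay away while still welcoming search engines. A sketch (crawler names must match exactly, and not every bot complies):

```
# robots.txt at the site root
User-agent: CCBot        # Common Crawl's crawler, a known LLM training source
Disallow: /

User-agent: *            # everyone else, including search engine crawlers
Allow: /
```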

What's the impact?

Even in its current limited form, AI-generated content can be useful. For example, you can use large language models like ChatGPT to quickly draft articles or site copy. Rather than relying on the tool to write your content in full, feed it your key concepts and let it work them into a rough first draft that you can use as a starting point to avoid writer's block.
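As a minimal sketch, here's what a draft request looked like with OpenAI's Python library as of early 2023 (the openai 0.x package and the text-davinci-003 model; names and endpoints have been evolving quickly, so check the current docs):

```python
# Hypothetical draft request; prompt and parameters are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # use your own key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Draft a rough 300-word article introduction about web performance.",
    max_tokens=400,
)

# Treat the output as a starting point for a human editor, not finished copy.
print(response.choices[0].text)
```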

An auto-generated AI-assisted article about web performance.

Unfortunately, the downside of this ease of use is that we're likely to see a lot of generated content flooding both the web and social media channels. With thousands of generated articles repeating slightly different versions of the same ideas, it will become all the more important to create well-structured, thoughtful content targeted at your audience in order to stand out.

Google has recently announced its competing Bard AI, which will follow a similar model to ChatGPT. The real risk for marketers is if Google begins to integrate LLM-generated answers directly into search results, keeping users on its own pages and discouraging them from clicking through to the sites in those results. It remains to be seen how this might impact search marketing.

So what are our next steps?

The truth is: no one knows, at least not yet.

LLM technology is still evolving, and even with the recent explosion of new LLM-based tools, there's still a lot to learn about their effect on both the web and your content.

At Blend, we always recommend stepping back and reviewing new tools and technologies against your organization's goals. What are those goals? How are you measuring them to ensure success? From there, make adjustments that help reach those goals, regardless of whether they employ ChatGPT.

Remember — if the web is full of content written by AI to fill a word count, much of that content is going to sound similar and impersonal. Meanwhile, your carefully crafted copy will speak directly to your clients and stand out. Even though language models can let you turn out a lot of content quickly, it takes careful planning and effort to make sure you're producing content that works well.

 
