What’s Next for Generative AI? Expectations for 2024 and beyond…

Bill Hart-Davidson
5 min read · Dec 30, 2023


Bill H-D at the Bellagio Gardens, Las Vegas, February 2023

In 2023, with the widespread release of Generative AI services, we saw growing anxiety about, and experimentation with, the automation of creative work. This is perhaps the first time automation has threatened to disrupt the job prospects of professional- and creative-class workers in the ways it has already reshaped manufacturing and parts of the service sector.

In 2024, I expect this trend to intensify as companies and organizations make calculated moves to “outsource” both repetitive information processing and creative tasks to Generative AI. This will likely mean a smaller, tighter workforce in knowledge work, in jobs that may or may not persist as our work routines change.

In some professional areas, like the law, I think we’ll see fairly quick adoption and a dramatic change in work habits and routines that could reconfigure job categories. We may not have fewer lawyers, but we might have fewer paralegals and legal assistants. And those who remain may be working with AI all day, every day.

What else might we expect for 2024? Here are a few things to watch for:

1) More GenAI services in more places. In 2023, the conversation was dominated by one service, OpenAI’s ChatGPT, but all of the big tech companies have LLMs now, and more services will debut in 2024. It will be interesting to watch how these providers try to differentiate their offerings as they compete for market share.

2) Rather than standalone services in “demo” mode, LLMs will be embedded more and more in the software and devices you already use: in writing software like Google Docs and Microsoft Office, and in platforms like Instagram and Snapchat. They’ll also find their way into internet-enabled devices like your kitchen appliances and your car…you might soon get emails from them!

3) Enhanced capability to work with multiple media formats. Currently, there are services that can generate just about any type of output — text, images, sound, video — from a text prompt. But you often have to use a separate site for each medium: MidJourney or DALL-E for images, ChatGPT for text, and so on. Soon, we’ll see these become integrated, one-stop shops for all kinds of input AND output!

Got a video you want some background music for? Share it and ask for a soundtrack that matches the on-screen action and…voila!

4) GenAI Enterprise Editions will let companies and organizations use GenAI more confidently and securely. With enterprise editions, organizations will be able to take advantage of GenAI services with more assurance that their data will be safe, secure, and private, and that using GenAI won’t put them at risk of non-compliance with regulations like HIPAA, as current commercial versions may. This trend is mostly a business-oriented one, but there are echoes here of the format wars for VCRs, internet browsers, and search engines of past eras. It will be a major area of competition for the largest GenAI “foundation model” providers, and the results may well determine which of these models eventually becomes the most widely used.

Not everything in 2024 will seem like an advance in the technology. We’ll also see a few trends that I expect will dampen enthusiasm and slow down the saturation of GenAI into our daily lives.

5) One big obstacle is coming: more lawsuits and regulation. Companies that hold copyrights and distribute creative content are suing and lobbying for regulatory action to protect their interests. They want licensing fees for material already used to train models (and perhaps harvested without permission), and they want to stop the generation of content that infringes copyright, such as new scenes of SpongeBob SquarePants. Legal action underway is already having a noticeable effect on how OpenAI’s services work: users will see ChatGPT or DALL-E refuse a request to reproduce something under copyright.

6) A glut of cheap but pretty bad content floods our streams and inboxes. When the cost of producing new marketing copy, lo-fi chillwave tunes, or small-business logos drops to almost zero, we can expect a flood of this material. It may well overwhelm the services that have, until now, allowed individual creators to sell artwork, music, and the like. Helping sellers and buyers sort wheat from chaff will become a new, lucrative goal for these platforms and marketplaces. And we may even see a resurgence of value for more local, “organic” creative content and products.

7) Misinformation & disinformation risks will persist and become more widely known. And we may start to develop some new critical thinking skills and habits in response to these risks.

The best-known risk right now concerns accuracy: “hallucinations,” moments when the LLM gets facts wrong because the language model, on its own, has no way to check facts. That requires additional training and human feedback. Some models have it and some don’t, and some versions of the models are better than others.

But another risk will escalate. Because content is now cheap and easy to produce and distribute at massive scale, I expect this capability in the hands of motivated spreaders of disinformation to become a more widely recognized problem. In online systems that let any user answer questions or post responses, for instance, there is significant risk of spreading false information that can quickly distort search results and more.

8) In my own world of education, we will continue to see learning activities and tests adapted in response to AI’s improving ability to, well, do your homework for you!

Early in 2023, schools rushed to change assignments and assessment strategies that could be affected. Most also updated policies related to academic dishonesty to include rules for using, or avoiding, LLMs in coursework. Today, most schools acknowledge that we must help students learn in a world where LLMs are part of the picture and will likely matter, one way or another, to the work students will go on to do. That means helping them learn to use these tools effectively and ethically, with the critical thinking needed to counter the risks.
