We know that artificially intelligent computers and robots are beginning to replace humans in manufacturing, distribution, mining, and many other sectors of the economy. But they’ll never replace people who create the things that make us human, like art and literature, right?
A slew of new AI software programs can create digital artwork, graphic images, videos, news articles, creative copy, poetry, prose, and more. These programs are still in their early stages of development, but they’re already quite impressive. Want a portrait of your dog done in the style of Picasso? Done. Need an attention-grabbing first line for your short story? They’ll give you some options. Need help writing a college essay? They’ve got you covered.
Here are just a few recent articles highlighting what these new programs can do:
Then there’s ChatGPT, an experimental software program that takes virtual assistants like Siri or Alexa to entirely new levels. A prototype of ChatGPT, a product of OpenAI, the same company that created the DALL-E art generator referenced in many of the articles above, became available to the public on November 30, 2022. The company, in effect, is inviting people to test its prototype. Writers are using it to do research, generate ideas for their short stories and novels, improve opening lines and endings, and write essays. Some writers are reportedly using it to rewrite their novels. Here are a few recent articles about ChatGPT:
It’s not hard to imagine that AI programs like DALL-E and ChatGPT may someday give artists and writers a run for their money and jobs. I prefer to think they’ll propel human creativity to new levels.
Like all technological advances, however, these new software programs could have disturbing and even dangerous consequences in the wrong hands. AI-generated news stories could wreak havoc with financial markets, stir civil unrest, or even start wars. Creative AI programs could someday put thousands of people who create content of all varieties – website designers, copywriters, editors, publicists, graphic artists, advertising creative departments, video producers and editors, and screenwriters, to name a few – out of work. The market for AI-generated artwork might eclipse that for human-created artwork down the road. AI-generated novels might someday outsell those written by humans, putting human writers out of business. Publishers in the future might prefer hassle-free, cost-efficient AI-generated books to the often tense and cost-inefficient dance that goes on between writers, editors, and publishing executives.
Although the new ventures into creative AI raise plenty of ethical and moral questions, it will be interesting to see how this new field unfolds and how it will impact human life. Being a glass-half-full person, I hope that AI creative generators will enhance and extend human creativity, not replace it. But we must be on our guard. Life is already complicated by fake phishing emails and texts and by human-generated misinformation and disinformation distributed by bots on social media. When AI-generated images, news, books, and more that look and sound real start filtering into the media mainstream, a lot more can, and likely will, go dangerously wrong. I pray that as Silicon Valley and other entrepreneurs roll out these transformative technologies, they will consider the abusive and unintended consequences as well as the benefits to humanity, and protect the public against the malevolent ones. Perhaps the most thoughtful article I’ve read lately about generative AI appeared in the New York Times in October; it is bylined by Kevin Roose, a technology columnist and author of Futureproof: 9 Rules for Humans in the Age of Automation.
The article highlights one of the more controversial generative AI startups, Stability AI, which had recently launched Stable Diffusion, an open-source generative AI program. Roose reports on the launch party he attended for this no-guardrails open-source platform. Before the party, Stable Diffusion and some of its open-source offshoots had spawned a flood of offensive images that caused Reddit to shut down several of its forums. Roose reported that Stability AI tried to control the situation by telling its users not to “generate anything you’d be ashamed to show your mother.” However, it did not establish the stricter filters on generated material that other, non-open-source companies use. Apparently, the generative AI companies that use filters are concerned that Stability AI and other open-source rivals will destabilize the industry’s prospects by attracting attention from federal regulators and Congress. Two quotes from Roose’s article have stayed with me.
The first is from the founder of Stability AI, Emad Mostaque, who explained why he believes giving billions of people open access to generative AI is a good thing:
“So much of the world is creatively constipated, and we’re going to make it so that they can poop rainbows.”
Mostaque’s quote made me wonder if “Making the World Poop Rainbows” was going to be his company’s mission statement.
The second quote was from another tech executive at the launch party who said: “You can’t put the genie back in the bottle.”
I do think the generative AI genie is out of the bottle, and the twists and turns of this new industry will be full of challenges that raise ethical, moral, and philosophical questions. The impending age of generative AI is already generating lots of interesting story ideas in my mind. This is not the last time I’ll blog about it.