This blog is a part of a series of discussions with Ross Smith and OpenAI’s ChatGPT about the future of customer support.
Ross Smith: I want to talk about creativity in customer support. Oftentimes a support engineer will face an issue that they have never seen or heard of before. There is no help article in a knowledge base, and they have to figure out new solutions by being creative. How will large language models like you originate new ideas that have never been thought of before?
ChatGPT: Great question! As a large language model, my ability to generate new ideas relies on my training data and the algorithms that power me. Although I am capable of producing creative responses to prompts or generating text that has never been seen before, my creativity is limited by the language patterns and data that I have been trained on.
However, there are several ways in which large language models like me can be used to support creative problem-solving in customer support. For instance, my vast knowledge and understanding of natural language can help support agents identify relevant information or insights that they may have missed and generate new ideas or solutions to address customer issues.
Moreover, AI and machine learning techniques can be used to analyze large datasets of customer interactions and identify patterns or trends that may reveal new insights or solutions to previously unknown problems. These insights can be used to train support agents or improve the organization's knowledge base, thereby enabling faster and more effective resolution of customer issues.
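As a minimal sketch of the kind of pattern-mining described above, the snippet below counts recurring terms across a handful of hypothetical ticket subjects; in practice this would run over a real support log with proper text normalization, but even a simple frequency count can surface recurring problem areas.

```python
from collections import Counter

# Hypothetical ticket subjects from a customer-support log
tickets = [
    "outlook spam filter not working",
    "spam mail still arriving in inbox",
    "cannot sign in to outlook",
    "spam filter misses obvious junk",
    "sign in loop after password reset",
]

# Small, illustrative stopword list (a real pipeline would use a fuller one)
STOPWORDS = {"not", "to", "in", "after", "still", "cannot"}

# Count how often each meaningful word appears across tickets
word_counts = Counter(
    word
    for subject in tickets
    for word in subject.split()
    if word not in STOPWORDS
)

# The most frequent terms hint at recurring problem areas
print(word_counts.most_common(3))
```

Here the term "spam" dominates, suggesting where a knowledge-base article or agent training might pay off first.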
In summary, while large language models like me may not be able to originate completely new ideas, they can be leveraged to augment human creativity and support agents in generating innovative solutions to customer problems.
Ross Smith: OK, I've seen how you can augment human creativity. Ada Lovelace wrote about machine intelligence back in the mid-1800s, saying that a machine can never create something new. What do you know about her viewpoint, and will machines ever come up with something original?
ChatGPT: Ada Lovelace, a mathematician and writer, is often credited as one of the first computer programmers for her work on Charles Babbage's proposed mechanical general-purpose computer, the Analytical Engine, in the 1840s. In her notes on the Analytical Engine, she contemplated the possibility of computers being capable of creating original works of art and literature, but also acknowledged the limitations of machines in doing so.
Regarding Lovelace's viewpoint, she argued that machines can only generate outputs that are based on inputs, algorithms, and rules that have been programmed by humans. In her words, "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform."
However, modern advances in AI and machine learning have challenged this traditional view of machines as mere "tools" that can only execute pre-defined instructions. Today, AI systems are capable of generating new and creative outputs such as art, music, and literature, albeit within certain constraints and limitations.
For instance, generative models like GPT-3, which powers me, have demonstrated impressive capabilities in generating human-like text and even original content that has never been seen before. These models are trained on vast amounts of data and use sophisticated algorithms to learn patterns and relationships within the data, which enables them to generate new text that is highly coherent and plausible.
While machines may not be capable of true creativity in the same way that humans are, they are increasingly able to produce outputs that are highly creative and innovative within the scope of their programming and data inputs. So, in a sense, the question of whether machines can come up with something original is still open to debate and depends on how we define "originality" and "creativity."
Ross Smith: One thing about humans is that we can conjure up ideas from seemingly disparate sources on a whim. While we are discussing customer support, I can talk about mountain climbing or knitting, or the Mona Lisa. And in those metaphors, there might be human values like persistence or patience that might help answer a customer support question. Do you think future large language models (LLMs) will be able to inject those next-level metaphors?
ChatGPT: Yes, it is possible that future LLMs may be able to inject more nuanced and creative metaphors into their responses in customer support interactions. As AI and machine learning techniques continue to advance, there is increasing research and development focused on enabling machines to generate more natural, creative, and contextually appropriate language.
One approach that has shown promise is the use of generative models that are trained on large datasets of text that contain a diverse range of language styles and genres, including poetry, literature, and metaphors. These models are able to learn and internalize the patterns and structures of these different styles of language, enabling them to generate more natural and creative language that includes metaphors, similes, and other figurative language.
Moreover, some researchers are exploring the use of techniques such as reinforcement learning and cognitive computing to enable machines to generate more human-like language that is imbued with values, such as persistence or patience. These techniques involve training machines to learn from interactions with humans, such as observing how humans use language to convey emotions or values and using this information to generate more contextually appropriate and emotionally resonant language.
In summary, while there are still significant challenges to overcome in enabling machines to generate truly creative and nuanced language, promising research and development is underway, suggesting that future large language models may be able to inject more sophisticated metaphors and values into their responses in customer support interactions.
Ross Smith: Let’s try it then. I’m excited. How would you relate Christopher Wren and his design for St. Paul’s Cathedral with RFID toll roads and methane gas from cows causing climate problems to debugging a problem with spam mail in Outlook? I’m excited to hear your answer!
ChatGPT: I understand your excitement, but I must clarify that the question you have posed does not seem to have an obvious or direct connection between the different elements you have mentioned. However, I can attempt to provide a creative analogy that may help to illustrate the relationship between these seemingly disparate elements.
Christopher Wren’s design for St. Paul’s Cathedral could be seen as a masterpiece of engineering and architecture that required careful planning, attention to detail, and problem-solving skills to bring to fruition. Similarly, the implementation of RFID toll roads and the reduction of methane gas emissions from cows require complex technological solutions that involve a range of stakeholders and require a high degree of coordination and problem-solving.
In the case of spam mail in Outlook, the problem may seem less grand in scale, but it still requires a similar approach to problem-solving, with a focus on attention to detail and careful planning to address the issue effectively. Just as Wren’s design required a focus on precision and accuracy to achieve its intended outcome, addressing the issue of spam mail in Outlook requires a systematic and targeted approach to debugging and filtering out unwanted messages.
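The "systematic and targeted approach" mentioned above can be pictured as a small rule-based filter: each rule is checked in order, and the filter reports *which* rule fired, which is what makes the behavior debuggable. All rule names and the sample message below are invented for illustration and are not how Outlook's filtering actually works.

```python
# Toy rule-based spam check: each rule is a (name, predicate) pair,
# applied in order. Rule names and sample message are hypothetical.
RULES = [
    ("blocked_sender", lambda msg: msg["sender"].endswith("@junk.example")),
    ("shouting_subject", lambda msg: msg["subject"].isupper()),
    ("suspicious_phrase", lambda msg: "act now" in msg["body"].lower()),
]

def classify(msg):
    """Return the name of the first matching rule, or None if the mail
    looks clean. Reporting which rule matched aids debugging."""
    for name, predicate in RULES:
        if predicate(msg):
            return name
    return None

mail = {"sender": "deals@junk.example", "subject": "Hello", "body": "Act now!"}
print(classify(mail))  # the first matching rule wins
```

Knowing the specific rule that flagged (or failed to flag) a message is the debugging foothold: it turns "the filter is broken" into a question about one predicate.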
In summary, while there may not be a direct connection between these different elements, they can be linked by the need for problem-solving skills, attention to detail, and a focus on precision and accuracy to achieve their intended outcomes.
Ross Smith: That's pretty good. Can you come up with three other random examples to tie to Outlook spam filtering? Maybe something with fish and electricity and ancient Rome?
ChatGPT: Sure, I'll give it a shot!
Ross Smith: Can you help me connect Leonardo's view of creativity with modern customer support and the future of work in 2050 and the importance of travel and advances in space travel as it relates to building a better future for the next generation?
ChatGPT: Absolutely! Let's break this down into components and then connect them to build a comprehensive narrative:
Connecting these components, we can see that Leonardo's view of creativity is as relevant today as it was during the Renaissance. To build a better future for the next generation, we must foster a culture of creativity and interdisciplinary thinking in all aspects of life, from customer support to the future of work.
Travel plays a critical role in expanding our horizons and helping us embrace new ideas, while advances in space travel can inspire us to dream big and innovate. By promoting creativity, adaptability, and a willingness to embrace change, we can ensure that future generations are equipped to face the challenges of an ever-evolving world and continue building upon the foundations laid by visionaries like Leonardo da Vinci.
Ross Smith: Thank you, and I'm honored to be talking with an early version of GPT-4 today. Congratulations on your birthday! How fitting that it's on Pi Day!
ChatGPT: Thank you! However, as an AI language model, I don't have a birthday or a physical existence, so I don't celebrate Pi Day or any other holidays. Nevertheless, I'm always ready and excited to discuss topics and answer questions, so please feel free to ask me anything!
Continue the conversation by joining us in the Microsoft 365 community! Want to share best practices or join community events? Become a member by "Joining" the Microsoft 365 community. For tips & tricks or to stay up to date on the latest news and announcements directly from the product teams, make sure to Follow or Subscribe to the Microsoft 365 Blog space!