In this session, we explore the essentials of workflows: what they are, how they are used, and the principles behind them, so that you can craft your own effective workflows. On display is what we refer to as a builder prompt, which documents the process you wish to automate. This prompt is then transformed by Copy AI into a series of AI-generated steps that constitute your workflow.
There are several methods for executing or utilizing workflows. The CSV import strategy is one approach: you upload a CSV file and the workflow runs in bulk, once per row. After the runs complete, you can export the results for further use. This is notably the simplest way to take advantage of workflows.
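To make the bulk pattern concrete, here is a minimal sketch of preparing such a CSV in Python. The column names (`blog_url`, `audience`) are hypothetical stand-ins for whatever starting inputs your particular workflow defines; in practice you would write the file to disk and upload it through the bulk-run interface.

```python
import csv
import io

# Hypothetical input columns matching a workflow's starting inputs.
rows = [
    {"blog_url": "https://example.com/post-1", "audience": "marketers"},
    {"blog_url": "https://example.com/post-2", "audience": "founders"},
]

# Build the CSV in memory; each row becomes one workflow run.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["blog_url", "audience"])
writer.writeheader()
writer.writerows(rows)
csv_text = buffer.getvalue()
print(csv_text)
```

Each row maps one-to-one to a single execution of the workflow, which is why this approach scales so easily.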
Another method involves using forms. You can either utilize the form URL provided by Copy AI or embed an iframe into a webpage or Notion document. This integration facilitates the embedding of workflows onto any site, enabling you to manage the user experience directly. A significant advantage here is that workflows become accessible not only to members within your organization but also to external users.
For those seeking deeper integration, APIs offer a solution. Workflows can be designed to be activated or utilized by other systems through their individual API endpoints. Additionally, webhooks can be set up to receive workflow outputs, which can then be processed in your subsequent system operations.
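As an illustration of the webhook side, the sketch below parses a hypothetical completion payload. The field names (`run_id`, `outputs`, `linkedin_post`) are assumptions for demonstration, not Copy AI's actual schema; your receiving endpoint would adapt this to whatever shape the real payload has.

```python
import json

def handle_workflow_webhook(body: str) -> dict:
    """Parse a hypothetical webhook payload carrying a workflow's outputs.

    The field names here are illustrative, not a documented schema.
    """
    payload = json.loads(body)
    return {
        "run_id": payload.get("run_id"),
        "outputs": payload.get("outputs", {}),
    }

# Example payload an endpoint might receive when a run completes.
example = json.dumps({
    "run_id": "run_123",
    "outputs": {"linkedin_post": "Generative AI is changing marketing..."},
})

result = handle_workflow_webhook(example)
print(result["outputs"]["linkedin_post"])
```

From here, the extracted outputs can be handed off to whatever downstream system consumes them, which is the "subsequent system operations" step described above.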
Now, let's delve into some of the fundamental concepts of Copy AI workflows, beginning with prompting. Constructing a prompt should be viewed as coding for large language models. Prompts are the primary input we use to guide models in performing tasks or creating content on our behalf. The beauty of prompts lies in their accessibility and tunability—you don't need to be a seasoned programmer to create effective prompts. Using natural language, anyone can write prompts that enable workflows to perform remarkable tasks.
In the realm of foundational models, the precision of language is crucial. By using the right words, specifying critical details, or providing the appropriate output structure, we can significantly enhance the efficacy of these models. Prompts are the key to unlocking this potential; they are adaptable tools that can be tailored to execute a wide array of complex tasks. Today, we will demonstrate this adaptability through a series of actions within a workflow. For further insights into specific actions within workflows, we encourage you to explore the action library.
Prompts are also inherently composable. This means that they can be linked together to perform a sequence of smaller tasks, which collectively enable the completion of much more sophisticated objectives. Prompt engineering is essentially the craft of continuously refining these prompts to optimize their performance for a particular task. When constructing workflows, what we are essentially doing is engineering prompts. This is based on not only our own best practices but also those established by our company, and tailored to the tasks we aim for the models to execute.
Despite some similarities to coding, working with prompts offers a more forgiving experience. A syntax error in code may prevent it from compiling or cause an error message, but with models, even if there's a mistake in the prompt, it is likely that some form of output will still be generated. By understanding that prompts are mechanisms through which we can direct models to fulfill our desired functions, and by recognizing the importance of word choice, we can dramatically improve the performance of these models within any given workflow.
Let's delve into the different types of prompts that are part of a Copy AI workflow. For example, in a chat interface, you communicate in real-time with a large language model to complete a specified task. The input provided in this context directly influences the output, such as receiving an answer to a question or generating the desired result for a requested task.
Within workflows, however, there are two distinct types of prompts. The first is the builder prompt, which is what you provide to the workflow builder engine to create a workflow. This involves defining the process you wish to automate. The purpose of the builder prompt is to generate a workflow that encapsulates the desired process through a series of steps.
Next, we have the step detail prompts. Within each workflow action or step, step detail prompts are used to define the specific aspect of your process for that particular action. A significant advantage of using the builder prompt is that, based on the definition of your process, the workflow builder can automatically generate the step detail prompts for you.
To begin with, customization is key when it comes to optimizing the outputs of large language models (LLMs) within workflows. Delving into the core concepts, we find that many models in use are completion models. Their primary function is to continue a sequence, consistently predicting the next word. This capability is what renders them incredibly powerful. However, it also exposes them to potential errors, such as "hallucinations," where the model may produce inaccurate information based on the input provided.
Large language models gain their proficiency from analyzing vast quantities of data. This extensive training enables them to excel at recognizing and completing patterns or sentences. Nevertheless, it's important to note that their ability to complete patterns is often stronger than their ability to ensure absolute accuracy. For instance, when prompted to answer a question, a model will attempt to provide a response, and if asked to include citations, it may even generate fictitious ones.
To mitigate this, we employ a technique known as grounding. Grounding essentially equips the models with the necessary data to answer questions or complete patterns with a higher degree of accuracy. It's akin to preparing for an exam by studying the relevant materials—you're more likely to perform well if you're well-informed.
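A minimal sketch of grounding is simply embedding the reference material in the prompt itself. The function below is a hypothetical helper, not part of any library; it shows the pattern of constraining the model to a supplied source rather than letting it complete the pattern from memory.

```python
def grounded_prompt(question: str, source_text: str) -> str:
    """Embed reference material directly in the prompt so the model
    answers from it instead of guessing (the grounding technique)."""
    return (
        "Using ONLY the source text below, answer the question. "
        "If the answer is not in the source, say so.\n\n"
        f"Source:\n{source_text}\n\n"
        f"Question: {question}"
    )

prompt = grounded_prompt(
    "How does generative AI help marketers?",
    "Generative AI drafts first-pass copy, freeing marketers to edit and strategize.",
)
print(prompt)
```

The explicit "say so if it's not in the source" instruction is a common guard against the fabricated citations described earlier.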
Moreover, models tend to perform better when they are focused on a single task rather than juggling multiple tasks simultaneously. This specialization, which we refer to as task focus, allows for greater control over the model's effectiveness in completing the given task.
When constructing a workflow, it's beneficial to consider the specificity of each prompt or task. It's preferable to err on the side of providing more detailed instructions rather than fewer. This approach brings us to the concept of a prompt chain.
A prompt chain is a sequence of tasks designed to guide the model in a step-by-step manner. For example, one might start with a task to summarize blog posts. Following this initial step, the model would then utilize the generated summary to identify the target audience for the post. By breaking down the process into clear, focused tasks, the model can navigate through the workflow efficiently and with greater precision.
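The blog-summary example above can be sketched as a two-step chain. The `run_llm` function here is a stand-in stub returning canned text (a real implementation would call a model); the point is the structure: each step is a single focused task, and each step's output feeds the next prompt.

```python
def run_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns canned text so the
    chain's structure can be seen end to end."""
    if prompt.startswith("Summarize"):
        return "The post argues generative AI boosts marketer productivity."
    return "Marketing leads at mid-size B2B companies."

def prompt_chain(blog_text: str) -> dict:
    # Step 1: a single, focused task — summarize the post.
    summary = run_llm(f"Summarize this blog post:\n{blog_text}")
    # Step 2: the next prompt is grounded in step 1's output.
    audience = run_llm(
        f"Given this summary, identify the target audience:\n{summary}"
    )
    return {"summary": summary, "audience": audience}

result = prompt_chain("Generative AI lets marketers draft campaigns in minutes...")
print(result)
```

Because each step does one thing, a weak output is easy to localize and the individual prompts are easy to refine—exactly the prompt-engineering loop described earlier.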
When constructing a LinkedIn post about how generative AI is revolutionizing marketing, we face a choice between two approaches. The first approach involves a prompt asking the AI to draft a post about the impact of generative AI on marketing. However, this method relies heavily on the AI's ability to independently decipher what aspects of generative AI are most pertinent to marketers.
The second approach takes a more structured path. It begins by instructing the AI to review a specific blog post and extract relevant details about how generative AI enhances marketer productivity. Using the insights gained, the AI is then tasked with drafting a LinkedIn post. This method incorporates two critical strategies: task focus and grounding. By focusing on a single task—extracting information from the blog post—we ensure that the AI is homed in on the exact content we need. Grounding, in turn, ensures the AI centers its attention on the most crucial element: the role of generative AI in heralding a new era for marketers.
To improve the efficacy of the first approach, we could add the link to the blog post within the original prompt. This addition applies the grounding concept, providing the AI with a specific resource to base its content on. However, this approach still lacks task focus, as it requires the AI to simultaneously draft the post and comprehend the blog post, a process that may lead to vagueness due to the dual objectives.
We find that the second approach is more effective. It is built on prompt chaining, leveraging task-focused instructions to enhance the quality of each output step. By directing the AI to undertake a focused review of the blog post before crafting the LinkedIn post, we can produce a more targeted and relevant message about the transformative power of generative AI for marketers.
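The contrast between the two approaches can be sketched as prompt definitions. This is illustrative only—the wording and the helper function are hypothetical—but it shows how the chained version separates the grounded extraction step from the drafting step.

```python
# Approach 1: one broad prompt — the model must both interpret
# the topic and write the post in a single step.
single_prompt = (
    "Draft a LinkedIn post about the impact of generative AI on marketing."
)

# Approach 2: a chain — extract grounded details first, then draft.
def chained_prompts(blog_text: str) -> list:
    extract = (
        "Review the blog post below and extract how generative AI "
        f"improves marketer productivity.\n\n{blog_text}"
    )
    draft = (
        "Using the extracted details from the previous step, draft a "
        "LinkedIn post about generative AI ushering in a new era for marketers."
    )
    return [extract, draft]

steps = chained_prompts("Generative AI drafts copy, personalizes outreach...")
print(steps)
```

The first step carries the grounding (the blog text is in the prompt) and each step carries a single focused task, which is why the chained version tends to produce the more targeted post.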
To practice these methods, try out the Challenges at Community.copy.ai.