Large language models (LLMs), such as ChatGPT and GPT-4, have drawn wide interest from academia and industry because of their remarkable versatility across tasks, and they are increasingly being applied in other disciplines. They still fall short on difficult jobs, however. For instance, when writing a lengthy report, the arguments put forward, the evidence offered to support them, and the overall structure may not always meet expectations in certain user contexts. Likewise, when acting as a virtual assistant for task completion, ChatGPT may not always communicate with users as intended, or may even behave inappropriately in certain professional settings.
LLMs like ChatGPT require careful prompt engineering to be used effectively. Prompt engineering becomes harder as the tasks asked of LLMs grow more complicated: responses get less predictable and prompt refinement takes longer. Moreover, users can only steer generation through the prompt and have no direct access to the process that produces a response, leaving a gap between what they ask for and what they get. To close this gap, researchers from Microsoft propose a novel human-LLM interaction pattern called Low-code LLM, named by analogy with low-code visual programming environments such as Visual Basic or Scratch.
Six predefined simple operations on an automatically generated workflow, such as adding or removing steps, graphical dragging, and text editing, allow users to review and control the complicated execution procedure. As seen in Figure 1, humans interact with the LLMs as follows: (1) a Planning LLM generates a highly structured workflow for the challenging task; (2) users modify the workflow using the built-in low-code operations, supported by clicking, dragging, or text editing; (3) an Executing LLM produces results by following the reviewed workflow; and (4) users continue to refine the workflow until they are satisfied with the results. Low-code LLM was applied to four complicated tasks: long content generation, large project development, task-completion virtual assistants, and knowledge-embedded systems.
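The workflow and its low-code editing operations can be pictured as a simple data structure. The sketch below is illustrative only; the class and method names (`Step`, `Workflow`, `add_step`, `remove_step`, `edit_text`, `to_prompt`) are hypothetical and not taken from the paper or any released code:

```python
from dataclasses import dataclass, field


@dataclass
class Step:
    """One node in the structured workflow drafted by the Planning LLM."""
    name: str
    description: str
    jump_to: list[str] = field(default_factory=list)  # optional jumps to other steps


@dataclass
class Workflow:
    steps: list[Step]

    # Low-code operations a user might perform through the graphical interface.
    def add_step(self, index: int, step: Step) -> None:
        """Insert a new step at a chosen position."""
        self.steps.insert(index, step)

    def remove_step(self, name: str) -> None:
        """Delete a step the user considers unnecessary."""
        self.steps = [s for s in self.steps if s.name != name]

    def edit_text(self, name: str, new_description: str) -> None:
        """Rewrite a step's description in place."""
        for s in self.steps:
            if s.name == name:
                s.description = new_description

    def to_prompt(self) -> str:
        """Serialize the edited workflow into a prompt for the Executing LLM."""
        return "\n".join(
            f"{i + 1}. {s.name}: {s.description}" for i, s in enumerate(self.steps)
        )
```

For example, a user could call `edit_text("Write", "Write sections in a formal tone")` and then `remove_step("Outline")` before the serialized workflow is handed to the Executing LLM.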
These examples show how the proposed framework lets users easily steer LLMs through challenging tasks. Low-code LLM offers the following benefits over the typical human-LLM interaction pattern:
1. Controllable Generation: Complex tasks are decomposed into structured execution plans and communicated to users as workflows. Users can then manage the LLMs' execution through low-code operations to obtain more controllable results, and the responses produced by following the customized workflow come closer to the user's needs.
2. Friendly Communication: The workflow's intuitive presentation lets users quickly understand the LLMs' execution logic, and its low-code operations through a graphical user interface make it easy to adjust. This reduces the need for time-consuming prompt engineering and enables users to effectively translate their ideas into detailed instructions that produce high-quality results.
3. Wide Applicability: The proposed paradigm can be applied to various challenging tasks across many domains, especially where human judgment or preference is crucial.
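The four-step interaction pattern described above can be sketched as a simple loop. This is a minimal illustration, not the paper's implementation; the function parameters (`planning_llm`, `get_user_edits`, `executing_llm`, `satisfied`) stand in for the Planning LLM, the user's low-code edits, the Executing LLM, and the user's acceptance check:

```python
def low_code_llm_session(task, planning_llm, executing_llm, get_user_edits, satisfied):
    """Sketch of the Low-code LLM interaction loop (hypothetical interfaces)."""
    # (1) The Planning LLM drafts a structured workflow for the task.
    workflow = planning_llm(task)
    while True:
        # (2) The user refines the workflow via low-code operations.
        workflow = get_user_edits(workflow)
        # (3) The Executing LLM produces a result by following the reviewed workflow.
        result = executing_llm(workflow)
        # (4) Iterate until the user is satisfied with the result.
        if satisfied(result):
            return result
```

In practice, `get_user_edits` would be backed by the graphical interface's clicking, dragging, and text-editing operations rather than a function call.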
Check out the Paper. Don’t forget to join our 19k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com
Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.