Can a prompt be written to work like a script with a for loop? That is the premise of this blog post. For example, a prompt could index a table, evaluate the text in the table "cells", and update the corresponding cells, sort of like key-value pairs, for example "step" and "expected results".
It already sort of does that! Chat will process a list of steps into a set of rows with the above-mentioned columns. Sometimes using a simple prompt and a more "human in the loop" approach is pragmatic. See below.
However, it is worth exploring the boundaries of more advanced prompts.
The goal in the above example is to tell chat the rules to apply to each cell. That is on top of chat having rules for how to format the table, which in a sequential programming language would sit outside the for loop. And that is on top of the rules for the main purpose: read the list, make each list item a step, and create an expected result, an instruction that would also sit outside the loop in sequential code.
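The loop analogy above can be made concrete as sequential code. This is a minimal sketch, where the function names and the placeholder `derive_expected` logic are my assumptions, standing in for what the model actually does:

```python
# Sketch of the prompt's structure as a sequential program.
# Rules outside the loop run once; per-cell rules run for every list item.

def derive_expected(step):
    # Placeholder for the model's job: infer an expected result from the action.
    return f"Expected outcome of: {step}"

def build_table(repro_list):
    table = []                            # outside the loop: two-column table format
    for sentence in repro_list:           # the "for loop" the prompt implies
        step = sentence                   # per-cell rule: each sentence is one step
        expected = derive_expected(step)  # per-cell rule: write an expected result
        table.append((step, expected))
    return table
```

Everything the prompt states once (table shape, one row per sentence) lives outside the loop; everything it states "per sentence" lives inside it.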
Is it better to write a "for loop" prompt or an "object/function call" prompt? I am still working on the latter.
Part of the desire for a loose set of pseudocode as a prompt language is to give chat "technician level" chores to do. The input to chat is a list written by engineers (ha ha), so chat has to be able to figure out personal style and just write "copy" for the output.
For example, sometimes a list is written in the wrong tense, and chat has to take care of things like that.
Prompt Pseudocode #1
Write a series of action steps and expected results.
Write a two-column table. In the first column put the action steps; in the second column put the expected results.
Use the list below, named Repro.
Analyze each sentence from Repro one at a time.
In Repro each sentence will become text in one row in the table. Do not split one sentence from Repro into multiple table rows.
In Repro each sentence will be the action step to be analyzed to write the expected results text. If part of the action step text contains the word "expected", all the text after it will be analyzed and used in writing the expected results.
Use the description of the test actions to write the expected results.
Here is some context for two of the software applications being tested. Softwarename1 and softwarename2 are the names of the software being tested. Softwarename1 is the acquisition software; Softwarename2 is a cloud-based application that receives data from Softwarename1.
Repro:
<insert list here>
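The "expected" keyword rule in the prompt above could also be handled deterministically, before the model ever sees the list. This is a rough sketch under my own assumptions (the function name and the naive keyword match are mine, not the author's):

```python
def split_step(sentence):
    # Per the prompt rule: text from the word "expected" onward feeds the
    # expected-results cell; everything before it is the action step.
    # Naive match: would also trigger on words like "unexpected".
    marker = "expected"
    idx = sentence.lower().find(marker)
    if idx == -1:
        return sentence.strip(), None     # no explicit expected result in the text
    action = sentence[:idx].strip().rstrip(",.")
    expected = sentence[idx:].strip()
    return action, expected
```

Pre-splitting like this takes one ambiguous job away from the model, which tends to make the table output more stable.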
Results:
This prompt, or rather versions of it, works well. But it is not stable, just like the simpler prompts. For example, sometimes it still splits rows. Sometimes I just use a human-in-the-loop approach and fix it myself, or even have chat fix it.
For example, if one item of the list is split into multiple table rows, I tell chat to combine the split rows. Or I might have chat regenerate the table, and that works. Sometimes this kind of mis-processing alerts me to weird punctuation or grammar in the input list.
To be clear, with human-in-the-loop work, it pays off to adjust the list as an input rather than to patch chat's results afterward. But given the remaining need for human-in-the-loop post-processing, the question returns to how much prompt code can be written or added. How deep can the pseudocode metaphor be taken?
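The split-row problem can at least be detected automatically. A minimal sketch, assuming exact substring matching is good enough to flag fragments (the function name and matching policy are my assumptions):

```python
def find_split_rows(repro_sentences, table_steps):
    # Flag table rows whose step text is only a fragment of an input sentence,
    # a sign the model split one Repro sentence across several rows.
    flagged = []
    for i, step in enumerate(table_steps):
        is_whole = step in repro_sentences
        is_fragment = any(step in sentence for sentence in repro_sentences)
        if not is_whole and is_fragment:
            flagged.append(i)
    return flagged
```

A check like this could decide automatically whether to ask chat to combine rows or to regenerate the table.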
API, of course:
I’ve thought of using an API to send a series of commands to a chat engine. That is a good path from manual to automatic testing: human-in-the-loop work upgraded to an API script that processes, and/or pre-processes, each list element.
Similarly, the API could process each row individually as a series of API calls.
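The one-call-per-row idea can be sketched as building one request per Repro sentence. This is only an illustration: the model name and payload shape below are generic assumptions, not any specific vendor's API, and nothing is actually sent over the network:

```python
import json

def build_row_requests(repro_sentences, model="placeholder-model"):
    # Hypothetical: one chat request per Repro sentence, instead of one big
    # prompt carrying the whole list. Payload shape is an assumption.
    requests_out = []
    for sentence in repro_sentences:
        payload = {
            "model": model,
            "messages": [
                {"role": "system",
                 "content": "Turn the step into a table row: action step, expected result."},
                {"role": "user", "content": sentence},
            ],
        }
        requests_out.append(json.dumps(payload))
    return requests_out
```

Processing one row at a time trades more API calls for smaller, more predictable outputs: the model can no longer split or merge rows, because it only ever sees one sentence.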
The bottom line:
However, my money is on the chat that will let me chat it into a series of steps, rather than having to drag out some API language. I want Native Chat Prompt Programming 🙂