Defense Acquisition Research Journal #109


Prompt Engineering

The use of LLMs enabled efficient evaluation of the 580 narrative responses. Using both ChatGPT and Claude added clarity and credibility to the data evaluation, but doing so was possible only through replicable prompts. Iterative prompt design ultimately allowed the analysis to be completed consistently, repeatedly, and at scale. Figure 7 shows the prompt used to code the data. Appendices B and C include a complete list of study prompts and sample input Word documents.

FIGURE 7. LLM CODING PROMPT

Prompt: Interpretation-focused coding

I'm a researcher conducting a survey on Acquisition Best practices. The following Word document contains a question from a survey. There should be roughly 45 total responses. There may also be fewer respondents if participants skipped the question. All data should be used to formulate codes. Start with an overview of the data. Next, in order of importance, provide at least 10 descriptive codes based on (1) the research question, (2) the frequency of similar responses to the question, and (3) the richness of the responses to the question. Codes should be 4-6 words. To build traceability, provide at least 2-3 quotes from the survey that point to each code. Said differently, provide a code and follow it with direct quotes. Finish by restating the codes with no quotes (just a list). Example below:

"-Code 1
--"Quote 1"
--"Quote 2"
--"Quote 3"
-Code 2...
Rank Ordered Codes List"

Prompts were copied and pasted into each model's input window, and the accompanying Word documents were then attached (Figure 8). Note that a separate Word file, which did not include the tenet, was created to measure sentiment; the study researcher wanted to ensure the sentiment and alignment results were not conflated or confused.
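A replicable prompt of this kind can also be assembled programmatically, which makes it easy to keep wording identical across models and survey questions. The sketch below is illustrative only (the function and parameter names are not from the study); it parameterizes the counts from the Figure 7 prompt so the same template can be reused or adjusted per question.

```python
def build_coding_prompt(expected_responses: int = 45,
                        min_codes: int = 10,
                        quotes_per_code: str = "2-3") -> str:
    """Assemble an interpretation-focused coding prompt in the style of
    Figure 7. Parameterizing the counts keeps the prompt replicable
    across survey questions (names here are illustrative)."""
    return (
        "I'm a researcher conducting a survey on Acquisition Best practices. "
        "The following Word document contains a question from a survey. "
        f"There should be roughly {expected_responses} total responses. "
        "There may also be fewer respondents if participants skipped the "
        "question. All data should be used to formulate codes. "
        "Start with an overview of the data. "
        f"Next, in order of importance, provide at least {min_codes} "
        "descriptive codes based on (1) the research question, "
        "(2) the frequency of similar responses to the question, and "
        "(3) the richness of the responses to the question. "
        "Codes should be 4-6 words. "
        f"To build traceability, provide at least {quotes_per_code} quotes "
        "from the survey that point to each code. "
        "Finish by restating the codes with no quotes (just a list)."
    )

# The resulting string would be pasted into each model's input window
# alongside the attached Word document.
prompt = build_coding_prompt()
```

Because the template is a single function, any change (e.g., a different response count for another survey question) produces a new but structurally identical prompt, preserving the consistency the study relied on.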


Defense ARJ, Summer 2025, Vol. 32 No. 2: 132—193
