On the applicability of large language models to generating realistic dialogues for simulating social engineering attacks
This study examines whether synthetic dialogues generated by LLMs between an "adversary" and a "victim" are plausible. To assess the credibility of our results, we verified them against open-source data from U.S. and Russian reports. The dialogue actors (LLM agents) were assigned synthetically generated biographical and personality profiles via prompt engineering techniques, specifically the Persona Pattern. The experimental data show a high level of stability and plausibility consistent with current trends in social engineering research. This demonstrates that LLMs can simulate realistic interactions within small social units, with the ultimate goal of computationally recreating social engineering attacks and supporting related lines of research.
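The persona assignment described above can be sketched as follows. This is a minimal illustration of the Persona Pattern, in which an agent's system prompt instructs the model to act as a specific person; the profile fields, names, and prompt wording are illustrative assumptions, not the exact prompts used in the study.

```python
# Illustrative sketch of the Persona Pattern: a synthetic biographical
# profile is rendered into a system prompt for an LLM agent.
# All field names and the example persona are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Persona:
    name: str
    age: int
    occupation: str
    traits: list = field(default_factory=list)

    def to_system_prompt(self) -> str:
        # Persona Pattern: tell the model who it is and instruct it
        # to stay in character for the whole dialogue.
        return (
            f"You are {self.name}, a {self.age}-year-old {self.occupation}. "
            f"Your personality traits: {', '.join(self.traits)}. "
            "Stay in character for the entire conversation."
        )


# Hypothetical "victim" agent profile.
victim = Persona(
    name="Anna",
    age=46,
    occupation="accountant",
    traits=["trusting", "risk-averse"],
)
print(victim.to_system_prompt())
```

In a full simulation, two such prompts (one for the "adversary", one for the "victim") would seed two separate LLM contexts whose turns are exchanged to produce the dialogue.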