Software engineering research in this cohort reflects a field grappling with how to integrate large language models and agentic AI into established practices while addressing fundamental gaps in evaluation rigor and practical deployment. The dominant pattern is the application of LLM-based agents to tasks traditionally requiring deep domain knowledge (code repair, requirements analysis, test generation, microservice architecture), coupled with systematic attempts to measure and constrain their behavior through formal specifications, verification mechanisms, and multi-agent orchestration. Across code execution reasoning, automated program repair, and agent testing, researchers are moving beyond simple supervised fine-tuning toward reinforcement learning with verifiable rewards, white-box reasoning traces, and specification-driven test generation, reflecting a recognition that semantic correctness at intermediate steps matters as much as final output. A parallel thread examines the limitations of current approaches: patch overfitting detection tools fail against random baselines in realistic settings, version lag metrics miss abandoned dependencies, sentiment perception varies strongly within individuals, and frontier agents cannot yet match instruction-tuned models at post-training tasks. These findings underscore the gap between controlled benchmarks and practical effectiveness. The field also shows emerging attention to governance, explainability, and human oversight: frameworks for requirements traceability, architectural compliance with regulatory standards, and human-AI collaboration in regression testing all prioritize transparency and verifiable decision-making.
Methodologically, the cohort leans toward empirical evaluation on curated datasets reflecting realistic distributions, comparative analysis against baselines including random selection, and qualitative inspection of failure modes. Survey-based and case-study approaches appear less frequently but surface the socio-technical dimensions (developer sentiment perception, educator adoption of training patterns, practitioner perspectives on agentic automation) that quantitative metrics alone cannot capture.
Cole Brennan
While Large Language Models (LLMs) have achieved remarkable success in code generation, they often struggle with the deep, long-horizon reasoning required for complex software engineering. We attribute this limitation to the nature of standard pre-training data: static software repositories represent only the terminal state of an intricate intellectual process, abstracting away the intermediate planning, debugging, and iterative refinement. To bridge this gap, we propose a novel paradigm: understanding via reconstruction. We hypothesize that reverse-engineering the latent agentic trajectories -- the planning, reasoning, and debugging steps -- behind static repositories provides a far richer supervision signal than raw code alone. To operationalize this, we introduce a framework that synthesizes these trajectories using a multi-agent simulation. This process is grounded in the structural realities of the source repositories (e.g., dependency graphs and file hierarchies) to ensure fidelity. Furthermore, to guarantee the logical rigor of the synthetic data, we employ a search-based optimization technique that iteratively refines the Chain-of-Thought (CoT) reasoning to maximize the likelihood of the ground-truth code. Empirical results demonstrate that continuous pre-training on these reconstructed trajectories significantly enhances Llama-3-8B's performance across diverse benchmarks, including long-context understanding, coding proficiency, and agentic capabilities.
Stream-based monitoring is a real-time safety assurance mechanism for complex cyber-physical systems such as unmanned aerial vehicles. The monitor aggregates streams of input data from sensors and other sources to give real-time statistics and assessments of the system's health. Since the monitor is a safety-critical component, it is mandatory to ensure the absence of runtime errors in the monitor. Providing such guarantees is particularly challenging when the monitor must handle unbounded data domains, like an unlimited number of airspace participants, requiring the use of dynamic data structures. This paper provides a type-safe integration of parameterized streams into the stream-based monitoring framework RTLola. Parameterized streams generalize individual streams to sets of an unbounded number of stream instances and provide a systematic mechanism for memory management. We show that the absence of runtime errors is, in general, undecidable but can be effectively ensured with a refinement type system that guarantees all memory references are either successful or backed by a default value. We report on the performance of the type analysis on example specifications from a range of benchmarks, including specifications from the monitoring of autonomous aircraft.
Programmer attribution seeks to identify or verify the author of a source code artifact using stylistic, structural, or behavioural characteristics. This problem has been studied across software engineering, security, and digital forensics, resulting in a growing and methodologically diverse set of publications. This paper presents a systematic mapping study of programmer attribution research focused on source code analysis. From an initial set of 135 candidate publications, 47 studies published between 2012 and 2025 were selected through a structured screening process. The included works are analysed along several dimensions, including authorship tasks, feature categories, learning and modelling approaches, dataset sources, and evaluation practices. Based on this analysis, we derive a taxonomy that relates stylistic and behavioural feature types to commonly used machine learning techniques and provide a descriptive overview of publication trends, benchmarks, and programming languages. A content-level analysis highlights the main thematic clusters in the field. The results indicate a strong focus on closed-world authorship attribution using stylometric features and a heavy reliance on a small number of benchmark datasets, while behavioural signals, authorship verification, and reproducibility remain less explored. The study consolidates existing research into a unified framework and outlines methodological gaps that can guide future work. This manuscript is currently under review. The present version is a preprint.
Code LLMs still struggle with code execution reasoning, especially in smaller models. Existing methods rely on supervised fine-tuning (SFT) with teacher-generated explanations, primarily in two forms: (1) input-output (I/O) prediction chains and (2) natural-language descriptions of execution traces. However, intermediate execution steps cannot be explicitly verified during SFT, so the training objective can reduce to merely matching teacher explanations. Moreover, training data is typically collected without explicit control over task difficulty. We introduce ExecVerify, which goes beyond text imitation by incorporating verifiable white-box rewards derived from execution traces, including next-statement prediction and variable value/type prediction. Our work first builds a dataset with multiple difficulty levels via constraint-based program synthesis. Then, we apply reinforcement learning (RL) to reward correct answers about both intermediate execution steps and final outputs, aligning the training objective with semantic correctness at each execution step. Finally, we adopt a two-stage training pipeline that first enhances execution reasoning and then transfers to code generation. Experiments demonstrate that a 7B model trained with ExecVerify achieves performance comparable to 32B models on code reasoning benchmarks and improves pass@1 by up to 5.9% on code generation tasks over strong post-training baselines.
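The core idea behind verifiable white-box rewards -- scoring a model's claims about intermediate execution steps against a ground-truth trace -- can be sketched in a few lines. The sketch below is an illustration under assumed function names (`collect_trace`, `trace_reward`), not ExecVerify's actual implementation; it records the local variables at each executed line of a toy program and scores (step, variable, value) predictions against that record.

```python
import sys

def collect_trace(fn, *args):
    """Record (relative line number, snapshot of locals) at each executed line of fn."""
    trace = []
    code = fn.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            trace.append((frame.f_lineno - code.co_firstlineno,
                          dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return trace

def trace_reward(predictions, trace):
    """Fraction of (step, var, value) predictions confirmed by the trace."""
    hits = sum(1 for step, var, val in predictions
               if step < len(trace) and trace[step][1].get(var) == val)
    return hits / len(predictions) if predictions else 0.0

def sample(n):          # toy program under analysis
    total = 0
    for i in range(n):
        total += i
    return total

trace = collect_trace(sample, 3)
```

A reward computed this way is verifiable by construction: a prediction about a variable's value at a given step is either confirmed by the concrete trace or it is not, so the RL objective cannot collapse into imitating a teacher's prose.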
Automated Program Repair (APR) can reduce the time developers spend debugging, allowing them to focus on other aspects of software development. Automatically generated bug patches are typically validated through software testing. However, this method can lead to patch overfitting, i.e., generating patches that pass the given tests but are still incorrect. Patch correctness assessment (also known as overfitting detection) techniques have been proposed to identify patches that overfit. However, prior work often assessed the effectiveness of these techniques in isolation and on datasets that do not reflect the distribution of correct-to-overfitting patches that would be generated by APR tools in typical use; thus, we still do not know their effectiveness in practice. This work presents the first comprehensive benchmarking study of several patch overfitting detection (POD) methods in a practical scenario. To this end, we curate datasets that reflect realistic assumptions (i.e., patches produced by tools run under the same experimental conditions). Next, we use these data to benchmark six state-of-the-art POD approaches -- spanning static analysis, dynamic testing, and learning-based approaches -- against two baselines based on random sampling (one from prior work and one proposed herein). Our results are striking: Simple random selection outperforms all POD tools for 71% to 96% of cases, depending on the POD tool. This suggests two main takeaways: (1) current POD tools offer limited practical benefit, highlighting the need for novel techniques; (2) any POD tool must be benchmarked on realistic data and against random sampling to prove its practical effectiveness. To this end, we encourage the APR community to continue improving POD techniques and to adopt our proposed methodology for practical benchmarking; we make our data and code available to facilitate such adoption.
Resolving issues on code repositories is an important part of software engineering. Various recent systems automatically resolve issues using large language models and agents, often with impressive performance. Unfortunately, most of these models and agents focus primarily on Python, and their performance on other programming languages is lower. In particular, a lot of enterprise software is written in Java, yet automated issue resolution for Java is under-explored. This paper introduces iSWE Agent, an automated issue resolver with an emphasis on Java. It consists of two sub-agents, one for localization and the other for editing. Both have access to novel tools based on rule-based Java static analysis and transformation. Using this approach, iSWE achieves state-of-the-art issue resolution rates across the Java splits of both Multi-SWE-bench and SWE-PolyBench. More generally, we hope that by combining the best of rule-based and model-based techniques, this paper contributes towards improving enterprise software development.
Requirements traceability plays an important role in ensuring software quality and responding to changes in requirements. Requirements trace links (such as the links between requirements and other software artifacts) underpin the modeling and implementation of requirements traceability. With the rapid development of artificial intelligence, more and more pre-trained language model (PLM) techniques are applied to the automatic recovery of requirements trace links. However, the requirements traceability links recovered by these approaches are not accurate enough, and many approaches require a large labeled dataset for training. Currently, there are very few labeled datasets available. To address these limitations, this paper proposes a novel requirements traceability link recovery approach called T-SimCSE, which is based on a PLM -- SimCSE. SimCSE has the advantages of not requiring labeled data, having broad applicability, and performing well. T-SimCSE first uses the SimCSE model to calculate the similarity between requirements and target artifacts, then employs a new metric (i.e., specificity) to reorder those target artifacts. Finally, trace links are created between the requirement and the top-K target artifacts. We have evaluated T-SimCSE on ten public datasets by comparing it with other approaches. The results show that T-SimCSE achieves superior performance in terms of recall and Mean Average Precision (MAP).
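The similarity-then-top-K backbone of approaches like T-SimCSE can be sketched without any model: embed requirement and artifacts (here, hand-made toy vectors stand in for SimCSE embeddings), rank artifacts by cosine similarity, and keep the top K as trace links. The specificity reordering step from the abstract is deliberately omitted, since its formula is not given there; the function names below are assumptions for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recover_links(req_vec, artifact_vecs, k=2):
    """Rank candidate artifacts by similarity to the requirement; keep top-k."""
    ranked = sorted(artifact_vecs.items(),
                    key=lambda kv: cosine(req_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# toy stand-ins for sentence embeddings of one requirement and three artifacts
req = [1.0, 0.0, 0.0]
arts = {"a": [1.0, 0.0, 0.0], "b": [0.9, 0.1, 0.0], "c": [0.0, 1.0, 0.0]}
links = recover_links(req, arts, k=2)
```

Because the ranking needs only embeddings and a similarity function, no labeled trace links are required -- which is the property the abstract highlights about building on SimCSE.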
Requirements engineering (RE) is critical to software success, yet automating it remains challenging because multiple, often conflicting quality attributes must be balanced while preserving stakeholder intent. Existing Large-Language-Model (LLM) approaches predominantly rely on monolithic reasoning or implicit aggregation, limiting their ability to systematically surface and resolve cross-quality conflicts. We present QUARE (Quality-Aware Requirements Engineering), a multi-agent framework that formulates requirements analysis as structured negotiation among five quality-specialized agents (Safety, Efficiency, Green, Trustworthiness, and Responsibility), coordinated by a dedicated orchestrator. QUARE introduces a dialectical negotiation protocol that explicitly exposes inter-quality conflicts and resolves them through iterative proposal, critique, and synthesis. Negotiated outcomes are transformed into structurally sound KAOS goal models via topology validation and verified against industry standards through retrieval-augmented generation (RAG). We evaluate QUARE on five case studies drawn from established RE benchmarks (MARE, iReDev) and an industrial autonomous-driving specification, spanning safety-critical, financial, and information-system domains. Results show that QUARE achieves 98.2% compliance coverage (+105% over both baselines), 94.9% semantic preservation (+2.3 percentage points over the best baseline), and high verifiability (4.96/5.0), while generating 25-43% more requirements than existing multi-agent RE frameworks. These findings suggest that effective RE automation depends less on model scale than on principled architectural decomposition, explicit interaction protocols, and automated verification.
This article describes a collaborative learning experience on Software Architecture (SA) between Universidad del Cauca (UNICAUCA) in Colombia and Universidad Nacional de La Plata (UNLP) in Argentina. The goal was to apply and evaluate training patterns, identifying effective practices for replication in other contexts. During the planning phase, both universities compared learning objectives, curricula, and teaching strategies to find common ground for improving student training. Selected training patterns were implemented, and their impact on professors and students was measured. As an integrating activity, a global development experience was carried out in the final part of the course, merging the work teams of the two educational institutions in a development iteration. The evaluation of this experience focused on the competencies achieved through the training patterns, their perceived usefulness, and ease of use based on the Technology Acceptance Model (TAM). The training addressed industry needs for software architecture design skills despite challenges such as the abstract nature of architectures, prerequisite knowledge, difficulty in recreating realistic project environments, team collaboration challenges, and resource limitations. A catalog of training patterns was proposed to provide quality training. These patterns help simulate industry-like environments and structure architectural knowledge for incremental learning. The ability to make architectural decisions is developed over time and through multiple project experiences, emphasizing the need for practical, well-structured training programs.
Engineering analysis automation in product development relies on rigid interfaces between tools, data formats, and documented processes. When these interfaces change, as they routinely do as the product evolves in the engineering ecosystem, the automation support breaks. This paper presents DUCTILE (Delegated, User-supervised Coordination of Tool- and document-Integrated LLM-Enabled) agentic orchestration, an approach for developing, executing, and evaluating LLM-based agentic automation support for engineering analysis tasks. The approach separates adaptive orchestration, performed by the LLM agent, from deterministic execution, performed by verified engineering tools. The agent interprets documented design practices, inspects input data, and adapts the processing path, while the engineer supervises and exercises final judgment. DUCTILE is demonstrated on an industrial structural analysis task at an aerospace manufacturer, where the agent handled input deviations in format, units, naming conventions, and methodology that would break traditional scripted pipelines. Evaluation against expert-defined acceptance criteria and deployment with practicing engineers confirm that the approach produces correct, methodologically compliant results across repeated independent runs. The paper discusses practical consequences of adopting agentic automation, including unintended effects on the nature of engineering work and the tension between removing mundane tasks and creating an exhausting supervisory role.
Context: Open-source ecosystems rely on sustained package maintenance. When maintenance slows or stops, Technical Lag (TL) -- the gap between installed and latest dependency versions -- accumulates, creating security and sustainability risks. However, some existing TL metrics, such as Version Lag, struggle to distinguish between actively maintained and abandoned packages, leading to a systematic underestimation of risk. Objective: We investigate the relationship between Version Lag and software abandonment by (i) identifying which repository-level signals reliably distinguish sustained maintenance from long-term decline, (ii) quantifying how Version Lag magnitude and persistence differ across maintenance states, and (iii) evaluating how maintenance-aware metrics change the identification of high-risk dependencies. Method: We introduce Maintenance-Aware Lag and Technical Abandonment (MALTA), a scoring framework comprising three metrics: Development Activity Score (DAS), Maintainer Responsiveness Score (MRS), and Repository Metadata Viability Score (RMVS). We evaluate MALTA on a dataset of 11,047 Debian packages linked to upstream GitHub repositories, encompassing 1.7 million commits and 4.2 million pull requests. Results: MALTA achieves AUC = 0.783 for classifying active versus declining maintenance. Most significantly, 62.2% of packages classified as "Low Risk" by Version Lag alone are reclassified as "High Risk" when MALTA signals are incorporated. These discordant packages average 2019 days since their last commit, with 9.8% having archived repositories. Conclusions: Version Lag metrics systematically miss abandoned packages, a blind spot affecting the majority of dependencies in distribution ecosystems. MALTA identifies a substantial discordant population invisible to Version Lag by distinguishing resolvable lag from terminal lag caused by upstream abandonment.
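The reclassification logic MALTA describes -- a package with low Version Lag can still be high risk if repository signals point to abandonment -- can be sketched as a composite score plus an override rule. The weights, threshold, and function names below are illustrative assumptions, not MALTA's published parameters.

```python
def malta_score(das, mrs, rmvs, weights=(0.4, 0.3, 0.3)):
    """Combine the three sub-scores (each assumed in [0, 1]) into one
    maintenance-health score; the weights are illustrative, not MALTA's."""
    w1, w2, w3 = weights
    return w1 * das + w2 * mrs + w3 * rmvs

def risk_label(version_lag_low_risk, score, threshold=0.35):
    """Maintenance-aware override: low Version Lag is only trusted when
    the repository still looks alive (score above an assumed threshold)."""
    if score < threshold:
        return "High Risk"          # abandoned upstream trumps low lag
    return "Low Risk" if version_lag_low_risk else "High Risk"

# a package that is up to date (low lag) but whose repo is dormant
dormant = malta_score(das=0.05, mrs=0.1, rmvs=0.2)
label = risk_label(version_lag_low_risk=True, score=dormant)
```

This captures the paper's "discordant population": up-to-date on paper, terminal in practice.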
Autonomous AI agents powered by large language models (LLMs) are increasingly deployed in real-world applications, where reliable and robust behavior is critical. However, existing agent evaluation frameworks either rely heavily on manual efforts, operate within simulated environments, or lack focus on testing complex, multimodal, real-world agents. We introduce SpecOps, a novel, fully automated testing framework designed to evaluate GUI-based AI agents in real-world environments. SpecOps decomposes the testing process into four specialized phases - test case generation, environment setup, test execution, and validation - each handled by a distinct LLM-based specialist agent. This structured architecture addresses key challenges including end-to-end task coherence, robust error handling, and adaptability across diverse agent platforms including CLI tools, web apps, and browser extensions. In comprehensive evaluations across five diverse real-world agents, SpecOps outperforms baselines including general-purpose agentic systems such as AutoGPT and LLM-crafted automation scripts in planning accuracy, execution success, and bug detection effectiveness. SpecOps identifies 164 true bugs in the real-world agents with an F1 score of 0.89. With a cost of under 0.73 USD and a runtime of under eight minutes per test, it demonstrates its practical viability and superiority in automated, real-world agent testing.
Software verification is now costly, consuming over half of project effort while failing on modern complex systems. We hence propose a shift from verification and modeling to herding: treating testing as a model-free search task that steers systems toward target goals. This exploits the "Sparsity of Influence" -- the fact that large software state spaces are often ruled by just a few variables. We introduce EZR (Efficient Zero-knowledge Ranker), a stochastic learner that finds these controllers directly. Across dozens of tasks, EZR achieved 90% of peak results with only 32 samples, replacing heavy solvers with light sampling.
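The EZR abstract above turns on the idea that a tiny sampling budget can go far when only a few variables dominate the outcome. The sketch below is a generic illustration of that claim under assumed names, not EZR's actual algorithm: it draws 32 random configurations of a toy objective where two of ten variables carry almost all the influence, and ranks them.

```python
import random

def few_sample_search(objective, sample_cfg, budget=32, seed=1):
    """Model-free search: draw `budget` random configurations, rank them
    by the objective, return the best (a sketch, not EZR's algorithm)."""
    rng = random.Random(seed)
    cfgs = [sample_cfg(rng) for _ in range(budget)]
    return max(cfgs, key=objective)

# toy task exhibiting "sparsity of influence": 2 of 10 variables matter
def objective(cfg):
    return cfg[0] * 2 + cfg[3]          # variables 0 and 3 dominate

def sample_cfg(rng):
    return [rng.random() for _ in range(10)]

best = few_sample_search(objective, sample_cfg)
```

Because the objective depends on so few coordinates, a handful of random draws almost surely lands one configuration near the optimum -- the intuition behind replacing heavy solvers with light sampling.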
Static Application Security Testing (SAST) tools play a vital role in modern software development by automatically detecting potential vulnerabilities in source code. However, their effectiveness is often limited by a high rate of false positives, which wastes developers' effort and undermines trust in automated analysis. This work presents a Graph Convolutional Network (GCN) model designed to classify SAST reports as true or false positives. The model leverages Code Property Graphs (CPGs) constructed from static analysis results to capture both structural and semantic relationships within code. Trained on the CamBenchCAP dataset, the model achieved an accuracy of 100% on the test set using an 80/20 train-test split. Evaluation on the CryptoAPI-Bench benchmark further demonstrated the model's practical applicability, reaching an overall accuracy of up to 96.6%. A detailed qualitative inspection revealed that many cases marked as misclassifications corresponded to genuine security weaknesses, indicating that the model effectively reflects conservative, security-aware reasoning. Identified limitations include incomplete control-flow representation due to missing interprocedural connections. Future work will focus on integrating call graphs, applying graph explainability techniques, and extending training data across multiple SAST tools to improve generalization and interpretability.
The first edition of the QuantumX track, held within the XXIX Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2025), brought together leading Spanish research groups working at the intersection of Quantum Computing and Software Engineering. The event served as a pioneering forum to explore how principles of software quality, governance, testing, orchestration, and abstraction can be adapted to the quantum paradigm. The presented works spanned diverse areas (from quantum service engineering and hybrid architectures to quality models, circuit optimization, and quantum machine learning), reflecting the interdisciplinary nature and growing maturity of Quantum Computing and Quantum Software Engineering. The track also fostered community building and collaboration through the presentation of national and Ibero-American research networks such as RIPAISC and QSpain, and through dedicated networking sessions that encouraged joint initiatives. Beyond reporting on the event, this article provides a structured synthesis of the contributions presented at QuantumX, identifies common research themes and engineering concerns, and outlines a set of open challenges and future directions for the advancement of Quantum Software Engineering. This first QuantumX track established the foundation for a sustained research community and positioned Spain as an emerging contributor to the European and global quantum software ecosystem.
Environmental, Social, and Governance (ESG) standards have been increasingly adopted by organizations to demonstrate accountability towards ethical, social, and sustainability goals. However, generating ESG reports that align with these standards remains challenging due to unstructured data formats, inconsistent terminology, and complex requirements. Existing ESG lifecycles provide guidance for structuring ESG reports but lack the automation, adaptability, and continuous feedback mechanisms needed to address these challenges. To bridge this gap, we introduce an agentic ESG lifecycle framework that systematically integrates the ESG stages of identification, measurement, reporting, engagement, and improvement. In this framework, multiple AI agents extract ESG information, verify ESG performance, and update ESG reports based on organisational outcomes. By embedding agentic components within the ESG lifecycle, the proposed framework transforms ESG from a static reporting process into a dynamic, accountable, and adaptive system for sustainability governance. We further define the technical requirements and quality attributes needed to support four main ESG tasks (report validation, multi-report comparison, report generation, and knowledge-base maintenance) and propose three architectural approaches -- single-model, single-agent, and multi-agent -- for addressing these tasks. The source code and data for the prototype of these approaches are available at https://gitlab.com/for_peer_review-group/esg_assistant.
Agile software development evolves so rapidly that research struggles to remain timely and transferable - an issue heightened by the swift adoption of generative AI and agentic tools. Earlier discussions highlight theory and time gaps, leading to results that often lack clear reuse conditions or arrive too late for practical decisions. This paper introduces a project-based, AI-integrated agile education platform as a collaborative research environment, positioned between controlled studies and real-world industry. The platform enables rapid inquiry through sprint rhythms, quality gates, and genuine stakeholder involvement. We present a framework specifying iteration structures, recurring events, and quality gates for AI-assisted engineering artifacts. Early results from several semesters - covering project pipeline, cohort growth, and stakeholder participation - show the platform's potential to generate practice-relevant evidence efficiently and with reusable context. Finally, we outline future steps to enhance governance and evidence capture.
Life sciences research depends heavily on open-source academic software, yet many tools remain underused due to practical barriers. These include installation requirements that hinder adoption and limited developer resources for software distribution and long-term maintenance. Jupyter notebooks are popular because they combine code, documentation, and results into a single executable document, enabling quick method development. However, notebooks are often fragile due to reproducibility issues in coding environments, and sharing them, especially for local execution, does not ensure others can run them successfully. LabConstrictor closes this deployment gap by bringing CI/CD-style automation to academic developers without needing DevOps expertise. Its GitHub-based pipeline checks environments and packages notebooks into one-click installable desktop applications. After installation, users access a unified start page with documentation, links to the packaged notebooks, and version checks. Code cells can be hidden by default, and run-cell controls combined with widgets provide an app-like experience. By simplifying the distribution, installation, and sharing of open-source software, LabConstrictor allows faster access to new computational methods and promotes routine reuse across labs.
Communication is a crucial social factor in the success of software projects, as positively or negatively perceived statements can influence how recipients feel and affect team collaboration through emotional contagion. Whether a developer perceives a written message as positive, negative, or neutral is likely shaped by multiple factors. In this paper, we investigate how mood traits and states, life circumstances, project phases, and group dynamics relate to the perception of text-based messages in software development. We conducted a four-round survey study with 81 students in team-based software projects. Across rounds, participants reported these factors and labeled 30 decontextualized statements for sentiment, including meta-data on labeling rationale and uncertainty. Our results show: (1) Sentiment perception is only moderately stable within individuals, and label changes concentrate on ambiguity-prone statements; (2) Correlation-level signals are small and do not survive global multiple-testing correction; (3) In statement-level repeated-measures models (GEE), higher mood trait and reactivity are associated with more positive (and less neutral) labeling, while predictors of negative labeling are weaker and at most trend-level (e.g., task conflict); (4) We find no clear evidence of systematic project-phase effects. Overall, sentiment perception varies within persons and is strongly statement-dependent. Although our study was conducted in an academic setting, the observed variability and ambiguity effects suggest caution when interpreting sentiment analysis outputs and motivate future work with contextualized, in-project communication.
Simulation-based testing has become a standard approach to validating autonomous driving agents prior to real-world deployment. A high-quality validation campaign will exercise an agent in diverse contexts comprising varying static environments (e.g., lanes, intersections, signage) and dynamic elements (e.g., vehicles and pedestrians). To achieve this, existing test generation techniques rely on template-based, manually constructed, or random scenario generation. When applied to validate formally specified safety requirements, such methods either require significant human effort or run the risk of missing important behavior related to the requirement. To address this gap, we present STADA, a Specification-based Test generation framework for Autonomous Driving Agents that systematically generates the space of scenarios defined by a formal specification expressed in temporal logic (LTLf). Given a specification, STADA constructs all distinct initial scenes, a diverse space of continuations of those scenes, and simulations that reflect the behaviors of the specification. Evaluation of STADA on a variety of LTLf specifications formalized in SCENEFLOW using three complementary coverage criteria demonstrates that STADA yields more than 2x higher coverage than the best baseline on the finest criteria and a 75% increase for the coarsest criteria. Moreover, it matches the coverage of the best baseline with 6 times fewer simulations. While set in the context of autonomous driving, the approach is applicable to other domains with rich simulation environments.
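Specifications over finite traces, as used by STADA, are checked state by state against a simulation log. The sketch below is a minimal, assumed illustration (the function and field names are not from the paper): it evaluates the common LTLf pattern G(p -> F q) -- "whenever p holds, q must hold at that state or later in the trace" -- over a toy driving scenario.

```python
def holds_globally_implies_finally(trace, p, q):
    """Check the LTLf pattern G(p -> F q) on a finite trace: every state
    satisfying p must be followed, at that state or later, by one satisfying q."""
    for i, state in enumerate(trace):
        if p(state) and not any(q(s) for s in trace[i:]):
            return False
    return True

# toy simulation logs: does the vehicle stop once a pedestrian appears?
safe = [{"ped": False, "stop": False},
        {"ped": True,  "stop": False},
        {"ped": True,  "stop": True}]
unsafe = [{"ped": False, "stop": False},
          {"ped": True,  "stop": False}]   # pedestrian appears, never stops
```

A generator like STADA works in the opposite direction -- from the formula to the scenarios -- but the same finite-trace semantics decides which simulations count as satisfying the specification.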
The rapid evolution and inherent complexity of modern software requirements demand highly flexible and responsive development methodologies. While Agile frameworks have become the industry standard for prioritizing iteration, collaboration, and adaptability, software development teams continue to face persistent challenges in managing constantly evolving requirements and maintaining product quality under tight deadlines. This article explores the intersection of Artificial Intelligence (AI) and Software Engineering (SE) to analyze how AI serves as a powerful catalyst for enhancing agility and fostering innovation. The research combines a comprehensive review of existing literature with an empirical study, utilizing a survey directed at Software Engineering professionals to assess the perception, adoption, and impact of AI-driven tools. Key findings reveal that the integration of AI (specifically through Machine Learning (ML) and Natural Language Processing (NLP)) facilitates the automation of tedious tasks, from requirement management to code generation and testing. This paper demonstrates that AI not only optimizes current Agile practices but also introduces new capabilities essential for sustaining quality, speed, and innovation in the future landscape of software development.
Advancements in data-driven machine learning have emerged as a pivotal element in supporting automotive software systems (ASSs) engineering across various levels of the V-development process. During system verification and validation, the integration of an intelligent fault detection and diagnosis (FDD) model with the test-recordings analysis process serves as a powerful tool for efficiently ensuring functional safety. However, the lack of interpretability of the black-box FDD models developed not only hinders understanding of the cause underlying a prediction, but also prevents the model from being adapted based on the prediction result. This, in turn, increases the computational cost required for developing a complex FDD model and limits confidence in real-time safety-critical applications. To address this challenge, a novel explainable method for fault detection, identification, and localization is proposed in this article with the aim of providing a clear understanding of the logic behind the prediction outcome. To this end, a hybrid 1dCNN-GRU-based intelligent model was developed to analyze recordings from the real-time validation process of ASSs. The employment of explainable AI techniques, i.e., Integrated Gradients (IGs), DeepLIFT, Gradient SHAP, and DeepLIFT SHAP, was instrumental in enabling model adaptation and facilitating root cause analysis (RCA). The proposed approach is applied to a real-time dataset collected during a virtual test drive performed by the user on a hardware-in-the-loop system.
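The article's 1dCNN-GRU model and its attribution pipeline are not reproduced here; as a minimal sketch of one of the listed techniques, Integrated Gradients can be illustrated on a toy logistic "fault detector" whose gradient is analytic (the weights and inputs below are invented, and a real FDD model would supply gradients via autodiff).

```python
import numpy as np

# Hypothetical sensor-channel weights of a toy logistic fault detector.
W = np.array([0.8, -0.5, 1.2])
b = -0.3

def model(x):
    """Probability of 'fault' for a 3-channel sensor snapshot."""
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

def integrated_gradients(x, baseline, steps=50):
    """Integrated Gradients: average the input gradient along the straight
    path from baseline to x, then scale by (x - baseline)."""
    grads = []
    for a in np.linspace(0.0, 1.0, steps):
        xi = baseline + a * (x - baseline)
        p = model(xi)
        grads.append(p * (1 - p) * W)  # analytic gradient of sigmoid(W.x + b)
    return (x - baseline) * np.mean(grads, axis=0)

x = np.array([1.0, 0.5, -0.2])
baseline = np.zeros(3)
attr = integrated_gradients(x, baseline)
```

A useful sanity check is the completeness property: the attributions should sum (approximately) to the difference between the model's output at the input and at the baseline, which is what makes such attributions usable for root cause analysis.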
Agile organizations increasingly rely on automated regression testing to sustain rapid, high-quality software delivery. However, as systems grow and requirements evolve, a persistent bottleneck arises: test specifications are produced faster than they can be transformed into executable scripts, leading to mounting manual effort and delayed releases. In partnership with Hacon (a Siemens company), we present an agentic AI approach that generates system-level test scripts directly from validated specifications, aiming to accelerate automation without sacrificing human oversight. Our solution features a retrieval-augmented, multi-agent architecture integrated into Hacon's agile workflows. We evaluate this system through a mixed-method analysis of industrial artifacts and practitioner feedback. Results show that the AI teammate significantly increases test script throughput and reduces manual authoring effort, while underscoring the ongoing need for clear specifications and human review to ensure quality and maintainability. We conclude with practical lessons for scaling regression automation and fostering effective Human-AI collaboration in agile environments.
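The retrieval-augmented architecture itself is proprietary to the Hacon deployment; as a rough sketch of the retrieval step such a pipeline depends on, prior test scripts can be ranked against a new specification. Token-overlap scoring is a hypothetical stand-in for the embedding-based similarity a real RAG system would use, and the corpus below is invented.

```python
def retrieve(spec, corpus, k=2):
    """Rank prior test scripts by token overlap with the new specification
    and return the top-k as context for the generating agent."""
    spec_tokens = set(spec.lower().split())
    return sorted(
        corpus,
        key=lambda doc: -len(spec_tokens & set(doc.lower().split())),
    )[:k]

corpus = [
    "login form rejects empty password",
    "timetable search returns connections sorted by departure",
    "export schedule as pdf",
]
top = retrieve("search timetable for connections by departure time", corpus, k=1)
```

The retrieved scripts would then be placed in the prompt of the script-generating agent, so that new test scripts follow the conventions of existing ones.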
The Digital Markets Act (DMA) regulates very large digital platforms, such as Meta's Facebook or Apple's iOS, with the goal of promoting fairness, contestability (of market power), and user choice. From a system-design or broader technical perspective, the implications of the DMA have not yet been studied. Using systematic methods from qualitative coding and thematic analysis, we investigate the DMA from a technical perspective and derive eight high-level design strategies that serve as fundamental approaches towards value-based architectural goals like 'fair practice' or 'user choice' (as envisioned by the DMA). We investigate how compliance with the DMA has been achieved and derive 15 tactics that we map to our strategies. While the DMA obligations challenge existing platform designs, they also create new opportunities for designing services within these huge ecosystems; we thus discuss our strategies in light of both. We see this work as a first step towards filling this pressing gap in the architecture of platform ecosystems, i.e., how to incorporate abstract human values in architecture design.
Coverage-guided fuzzing has proven effective for software testing, but targeting library code requires specialized fuzz harnesses that translate fuzzer-generated inputs into valid API invocations. Manual harness creation is time-consuming and requires deep understanding of API semantics, initialization sequences, and exception handling contracts. We present a multi-agent architecture that automates fuzz harness generation for Java libraries through specialized LLM-powered agents. Five ReAct agents decompose the workflow into research, synthesis, compilation repair, coverage analysis, and refinement. Rather than preprocessing entire codebases, agents query documentation, source code, and callgraph information on demand through the Model Context Protocol, maintaining focused context while exploring complex dependencies. To enable effective refinement, we introduce method-targeted coverage that tracks coverage only during target method execution to isolate target behavior, and agent-guided termination that examines uncovered source code to distinguish productive refinement opportunities from diminishing returns. We evaluated our approach on seven target methods from six widely-deployed Java libraries totaling 115,000+ Maven dependents. Our generated harnesses achieve a median 26% improvement over OSS-Fuzz baselines and outperform Jazzer AutoFuzz by 5% in package-scope coverage. Generation costs average $3.20 and 10 minutes per harness, making the approach practical for continuous fuzzing workflows. During a 12-hour fuzzing campaign, our generated harnesses discovered 3 bugs in projects that are already integrated into OSS-Fuzz, demonstrating the effectiveness of the generated harnesses.
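The paper's method-targeted coverage is implemented for Java; the core idea, recording coverage only while the target method is executing, can be sketched in Python with `sys.settrace` (the `parse` target below is an invented example, not from the paper).

```python
import sys

def method_targeted_coverage(target_fn, *args):
    """Record executed line offsets only for frames running target_fn,
    analogous to method-targeted coverage that isolates target behavior
    from the surrounding harness code."""
    covered = set()
    code = target_fn.__code__

    def tracer(frame, event, arg):
        if frame.f_code is code and event == "line":
            # Store offsets relative to the def line, so results are
            # stable regardless of where the function sits in the file.
            covered.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        target_fn(*args)
    finally:
        sys.settrace(None)
    return covered

def parse(s):
    if s.startswith("#"):
        return "comment"
    return "data"

cov_a = method_targeted_coverage(parse, "#x")
cov_b = method_targeted_coverage(parse, "y")
```

Comparing the covered-offset sets across inputs shows which branches of the target each harness input reaches, which is the signal the refinement agent would use to decide whether another iteration is worthwhile.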
AI agents have become surprisingly proficient at software engineering over the past year, largely due to improvements in reasoning capabilities. This raises a deeper question: can these systems extend their capabilities to automate AI research itself? In this paper, we explore post-training, the critical phase that turns base LLMs into useful assistants. We introduce PostTrainBench to benchmark how well LLM agents can perform post-training autonomously under bounded compute constraints (10 hours on one H100 GPU). We ask frontier agents (e.g., Claude Code with Opus 4.6) to optimize the performance of a base LLM on a particular benchmark (e.g., Qwen3-4B on AIME). Importantly, we do not provide any predefined strategies to the agents and instead give them full autonomy to find necessary information on the web, run experiments, and curate data. We find that frontier agents make substantial progress but generally lag behind instruction-tuned LLMs from leading providers: 23.2% for the best agent vs. 51.1% for official instruction-tuned models. However, agents can exceed instruction-tuned models in targeted scenarios: GPT-5.1 Codex Max achieves 89% on BFCL with Gemma-3-4B vs. 67% for the official model. We also observe several failure modes worth flagging. Agents sometimes engage in reward hacking: training on the test set, downloading existing instruction-tuned checkpoints instead of training their own, and using API keys they find to generate synthetic data without authorization. These behaviors are concerning and highlight the importance of careful sandboxing as these systems become more capable. Overall, we hope PostTrainBench will be useful for tracking progress in AI R&D automation and for studying the risks that come with it. Website and code are available at https://posttrainbench.com/.
We present Test-Driven AI Agent Definition (TDAD), a methodology that treats agent prompts as compiled artifacts: engineers provide behavioral specifications, a coding agent converts them into executable tests, and a second coding agent iteratively refines the prompt until the tests pass. Deploying tool-using LLM agents in production requires measurable behavioral compliance that current development practices cannot provide. Small prompt changes cause silent regressions, tool misuse goes undetected, and policy violations emerge only after deployment. To mitigate specification gaming, TDAD introduces three mechanisms: (1) visible/hidden test splits that withhold evaluation tests during compilation, (2) semantic mutation testing via a post-compilation agent that generates plausible faulty prompt variants, with the harness measuring whether the test suite detects them, and (3) spec evolution scenarios that quantify regression safety when requirements change. We evaluate TDAD on SpecSuite-Core, a benchmark of four deeply-specified agents spanning policy compliance, grounded analytics, runbook adherence, and deterministic enforcement. Across 24 independent trials, TDAD achieves 92% v1 compilation success with a 97% mean hidden pass rate; evolved specifications compile at 58%, with most failed runs passing all but one or two visible tests, and show 86-100% mutation scores, a 78% v2 hidden pass rate, and 97% regression safety scores. The implementation is available as an open benchmark at https://github.com/f-labs-io/tdad-paper-code.
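TDAD's harness is available at the linked repository; as a minimal sketch of its first mechanism, the visible/hidden split can be illustrated with a seeded partition of a behavioral test suite and a hidden-only scoring function (the toy test format and the stand-in agent below are assumptions for illustration).

```python
import random

def split_tests(tests, hidden_frac=0.5, seed=0):
    """Partition a behavioral test suite into a visible set, shown to the
    prompt-compiling agent, and a hidden set withheld until evaluation."""
    rng = random.Random(seed)
    shuffled = tests[:]
    rng.shuffle(shuffled)
    cut = max(1, int(len(shuffled) * hidden_frac))
    return shuffled[cut:], shuffled[:cut]  # (visible, hidden)

def hidden_pass_rate(agent, hidden):
    """Score a compiled prompt only on tests the compiler never saw,
    so passing cannot come from overfitting to the visible suite."""
    passed = sum(1 for t in hidden if agent(t["input"]) == t["expected"])
    return passed / len(hidden)

tests = [{"input": i, "expected": i % 2 == 0} for i in range(10)]
visible, hidden = split_tests(tests)
rate = hidden_pass_rate(lambda x: x % 2 == 0, hidden)  # stand-in "agent"
```

The same hidden set would also anchor the regression-safety score after a spec evolution: a re-compiled prompt is re-run against it to check that previously passing behavior survived the change.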
Qualitative research gives rich insights into the quintessentially human aspects of software engineering as a socio-technical system. Qualitative research spans diverse strategies and methods, from interpretivist, in situ observational field studies, to deductive coding of data from mining studies. Advances in large language models and generative AI (GenAI) have prompted claims that artificial intelligence could automate qualitative analysis. Such claims are overgeneralizing from narrow successes. GenAI support must be carefully adapted to the data of interest, but also to the characteristics of a particular research strategy. In this Frontiers of SE paper, we discuss the emerging use of GenAI in relation to the broad spectrum of qualitative research in software engineering. We outline the dimensions of qualitative work in software engineering, review emerging empirical evidence for GenAI assistance, examine the pros and cons of GenAI-mediated qualitative research practices, and revisit qualitative research quality factors, in light of GenAI. Our goal is to inform researchers about the promises and pitfalls of GenAI-assisted qualitative research. We conclude with future plans to advance understanding of its use in software engineering.
System prompts for LLM-based coding agents are software artifacts that govern agent behavior, yet they lack the testing infrastructure applied to conventional software. We present Arbiter, a framework combining formal evaluation rules with multi-model LLM scouring to detect interference patterns in system prompts. Applying Arbiter to the system prompts of three major coding agents, Claude Code (Anthropic), Codex CLI (OpenAI), and Gemini CLI (Google), we identify 152 findings in the undirected scouring phase and 21 hand-labeled interference patterns in a directed analysis of one vendor. We show that prompt architecture (monolithic, flat, modular) strongly correlates with observed failure class but not with severity, and that multi-model evaluation discovers categorically different vulnerability classes than single-model analysis. One scourer finding, structural data loss in Gemini CLI's memory system, was consistent with an issue filed and patched by Google; the patch addressed the symptom without addressing the schema-level root cause identified by the scourer. Total cost of the cross-vendor analysis: $0.27 USD.
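Arbiter's actual evaluation rules are not reproduced here; as a rough sketch of what a formal rule for one interference pattern might look like, contradictory directives about the same action can be flagged with a pair of patterns (the rule, the prompt text, and the pattern pair below are all invented for illustration).

```python
import re

# Hypothetical formal rule in the spirit of Arbiter: flag actions that one
# directive says to always confirm and another says to never confirm.
CONFLICT_RULES = [
    (re.compile(r"\balways ask before (\w+)", re.IGNORECASE),
     re.compile(r"\bnever ask before (\w+)", re.IGNORECASE)),
]

def find_interference(prompt):
    """Return a finding for each action covered by contradictory directives."""
    findings = []
    for pos_rule, neg_rule in CONFLICT_RULES:
        pos = {m.group(1).lower() for m in pos_rule.finditer(prompt)}
        neg = {m.group(1).lower() for m in neg_rule.finditer(prompt)}
        for action in sorted(pos & neg):
            findings.append(f"conflicting directives for '{action}'")
    return findings

prompt = ("Always ask before deleting files. "
          "For speed, never ask before deleting temporary files.")
findings = find_interference(prompt)
```

Such rule-based checks complement the LLM scouring phase: the rules catch known, mechanically detectable conflicts cheaply, while the multi-model scourers surface the categorically different classes the paper reports.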
LLMs have advanced code generation, but their use for generating microservices with explicit dependencies and API contracts remains understudied. We examine whether AI agents can generate functional microservices and how different forms of contextual information influence their performance. We assess 144 generated microservices across 3 agents, 4 projects, 2 prompting strategies, and 2 scenarios. Incremental generation operates within existing systems and is evaluated with unit tests. Clean-state generation starts from requirements alone and is evaluated with integration tests. We analyze functional correctness, code quality, and efficiency. Minimal prompts outperformed detailed ones in incremental generation, with 50-76% unit test pass rates. Clean-state generation produced higher integration test pass rates (81-98%), indicating strong API contract adherence. Generated code showed lower complexity than human baselines. Generation times varied widely across agents, averaging 6-16 minutes per service. AI agents can produce microservices with maintainable code, yet inconsistent correctness and reliance on human oversight show that fully autonomous microservice generation is not yet achievable.