The Inference Report

March 29, 2026

Capital and technical capability have outpaced institutional agreement on control. Anthropic maintains boundaries around its models despite Pentagon pressure because it has consumer revenue to rely on. Hugging Face released OpenClaw to position open tooling as a counterweight to proprietary lock-in. Meanwhile, the infrastructure spreads faster than governance can follow: AI-driven border surveillance rolls out across West Africa with minimal oversight, recruiters work around AI hiring systems because they've lost confidence in their outputs, and xAI has hemorrhaged nearly all its co-founders, suggesting no consensus exists even within single companies on what the technology should optimize for.

Research in computer science and society reveals why this fragmentation persists. Studies on AI in education show epistemic authority and accountability cannot be treated as technical problems alone. Work on fairness exposes how aggregate metrics systematically fail to capture real harms, producing false assurance about system safety. Structural asymmetries of power and accountability inhere in system design itself and cannot be remedied by transparency or consent alone. The research reframes AI governance not as a problem of better alignment or clearer rules, but as a problem of institutional and technical architecture: who decides, who bears consequences, and whether those roles can be reconciled within a single interaction.

On the development side, the pattern is clear: teams have stopped asking whether agents work and started asking how to make them reliable and auditable. Trending repositories show the industry moving past hype into operational concerns. AgentScope's explicit pitch around visibility and trust reflects this shift. Developers are solving last-mile problems, adapting general models to specific hardware and tasks without cloud dependencies, and building specification systems that can be versioned and reasoned about rather than relying on prompt engineering. The unglamorous work of making agents useful enough to deploy is underway. What remains unresolved is whether any company, government, or standards body can enforce a coherent answer to what these systems should optimize for when they operate across borders and jurisdictions with zero coordination.

Grant Calloway

Research Papers — Focused
Generative AI User Experience: Developing Human-AI Epistemic Partnership cs.CY

Generative AI (GenAI) has rapidly entered education, yet its user experience is often explained through adoption-oriented constructs such as usefulness, ease of use, and engagement. We argue that these constructs are no longer sufficient because systems such as ChatGPT do not merely support learning tasks but also participate in knowledge construction. Existing theories cannot explain why GenAI frequently produces experiences characterized by negotiated authority, redistributed cognition, and accountability tension. To address this gap, this paper develops the Human-AI Epistemic Partnership Theory (HAEPT), explaining the GenAI user experience as a form of epistemic partnership that features a dynamic negotiation of three interlocking contracts: epistemic, agency, and accountability. We argue that findings on trust, over-reliance, academic integrity, teacher caution, and relational interaction with GenAI can be reinterpreted as tensions within these contracts rather than as isolated issues. Instead of holding a single, stable view of GenAI, users adjust how they relate to it over time through calibration cycles. These repeated interactions explain why trust and skepticism often coexist and how partnership modes describe recurrent configurations of human-AI collaboration across tasks. To demonstrate the usefulness of HAEPT, we applied it to analyze the UX of collaborative learning with AI speakers and AI-facilitated scientific argumentation, illustrating different contract configurations.
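Read computationally, the theory's three contracts and calibration cycles suggest a simple state-update loop. The sketch below is our own illustrative reading, not anything from the paper; the class name, fields, and update rule are all assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch, not from the paper: HAEPT's three contracts as
# explicit state that a calibration cycle renegotiates after each
# interaction, depending on whether the user's verification succeeds.

@dataclass
class PartnershipState:
    epistemic: float = 0.5       # authority the user grants the AI's claims
    agency: float = 0.5          # share of the task delegated to the AI
    accountability: float = 1.0  # responsibility the user retains

def calibrate(state: PartnershipState, verified: bool, lr: float = 0.1) -> PartnershipState:
    """One calibration cycle: a verified output shifts authority and
    delegation toward the AI; a failed check shifts them back."""
    step = lr if verified else -lr
    state.epistemic = min(1.0, max(0.0, state.epistemic + step))
    state.agency = min(1.0, max(0.0, state.agency + step))
    # Accountability is deliberately left fixed: institutions still hold
    # the user responsible, which is the paper's "accountability tension".
    return state

# Mixed outcomes keep the state mid-range, so trust and skepticism
# coexist rather than converging, as the theory predicts.
state = PartnershipState()
for verified in (True, True, False, True, False):
    state = calibrate(state, verified)
print(state)
```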

From Untamed Black Box to Interpretable Pedagogical Orchestration: The Ensemble of Specialized LLMs Architecture for Adaptive Tutoring cs.CY

Monolithic Large Language Models (LLMs) used in educational dialogue often behave as "black boxes," where pedagogical decisions are implicit and difficult to audit, frequently violating instructional constraints by providing answers too early. We introduce the Ensemble of Specialized LLMs (ES-LLMs) architecture, which separates decision-making from wording. Pedagogical actions are selected by a deterministic, rules-based orchestrator coordinating specialized agents covering tutoring, assessment, feedback, scaffolding, motivation, and ethics, guided by an interpretable Bayesian Knowledge Tracing (BKT) student model. An LLM renderer surface-realizes the chosen action in natural language. This design emphasizes reliability and controllability: constraints such as "attempt-before-hint" and hint caps are enforced as explicit rules, and the system logs per-turn agent traces and constraint checks. Validation of pedagogical quality via human expert reviewers (N=6) and a multi-LLM-as-judge panel (six state-of-the-art models) showed that ES-LLMs were preferred in 91.7% and 79.2% of cases, respectively. The architecture significantly outperformed monolithic baselines across all seven dimensions, particularly in Scaffolding & Guidance and Trust & Explainability. Furthermore, a Monte Carlo simulation (N=2,400) exposed a "Mastery Gain Paradox," in which monolithic tutors inflated short-term performance through over-assistance. In contrast, ES-LLMs achieved 100% adherence to pedagogical constraints (e.g., attempt-before-hint) and a 3.3x increase in hint efficiency. Operationally, ES-LLMs reduced costs by 54% and latency by 22% by using stateless prompts. We conclude that structural decoupling is essential for transforming stochastic models into trustworthy, verifiable, and resource-efficient pedagogical agents.
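The separation the authors describe is concrete enough to sketch. The following is a minimal illustration under our own assumptions: the function names, thresholds, and BKT parameters (slip, guess, transit) are hypothetical, the BKT update is the standard textbook formula rather than the paper's implementation, and an LLM renderer would verbalize the selected action downstream.

```python
# Minimal sketch (hypothetical names, not the paper's code) of the ES-LLMs
# idea: a deterministic orchestrator selects the pedagogical action and
# enforces constraints as explicit, auditable rules.

def bkt_update(p_mastery: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2, transit: float = 0.15) -> float:
    """Standard Bayesian Knowledge Tracing: posterior given the observed
    answer, then the learning transition."""
    if correct:
        posterior = p_mastery * (1 - slip) / (
            p_mastery * (1 - slip) + (1 - p_mastery) * guess)
    else:
        posterior = p_mastery * slip / (
            p_mastery * slip + (1 - p_mastery) * (1 - guess))
    return posterior + (1 - posterior) * transit

def select_action(p_mastery: float, attempts: int, hints_given: int,
                  hint_cap: int = 3) -> str:
    """Deterministic rules: attempt-before-hint and the hint cap are
    explicit branches, so every decision can be traced and checked."""
    if attempts == 0:
        return "prompt_attempt"  # attempt-before-hint rule
    if p_mastery < 0.4 and hints_given < hint_cap:
        return "give_hint"
    if p_mastery >= 0.95:
        return "advance_topic"
    return "give_feedback"

# Per-turn trace log, mirroring the paper's emphasis on auditability.
p, trace = 0.3, []
for turn, (attempted, correct) in enumerate([(False, False), (True, False), (True, True)]):
    hints = sum(1 for t in trace if t["action"] == "give_hint")
    action = select_action(p, attempts=int(attempted), hints_given=hints)
    if attempted:
        p = bkt_update(p, correct)
    trace.append({"turn": turn, "action": action, "p_mastery": round(p, 3)})
print(trace)
```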

Integrating Mental Health, Well-Being, and Sustainability into Software Engineering Education cs.CY

Mental health and well-being are major concerns in higher education and professional fields such as software engineering, yet are often overlooked in curricula. This paper describes our approach to include mental health, well-being, and sustainability in software engineering education in two ways: (1) well-being-focused software projects that ask students to design technical solutions or research addressing mental health and sustainability or societal challenges, and (2) brief classroom interventions such as short reflective discussions and team-building activities. We argue that this combination can help students see software engineering more broadly while creating healthier learning environments. Our analysis of reflections from 60 students found several positive outcomes: students gained a more human-centred perspective, had more team discussions about mental health, and began to see well-being as inspiration for using software to benefit society and individuals rather than merely as a technical or business tool. By combining technical skills with awareness of well-being, we argue that software engineering education can prepare future developers to be both skilled programmers and responsible professionals who care about human well-being.

The First Generation of AI-Assisted Programming Learners: Gendered Patterns in Critical Thinking and AI Ethics of German Secondary School Students cs.CY

The first generation of students is learning to program alongside GenAI (Generative Artificial Intelligence) tools, raising questions about how young learners critically engage with them and perceive ethical responsibilities. While prior research has focused on university students or developers, little is known about secondary school novices, who represent the next cohort of software engineers. To address this gap, we conducted an exploratory study with 84 German secondary school students aged 16-19 attending software development workshops. We examined their critical thinking practices in AI-assisted programming, perceptions of AI ethics and responsibility, and gender-related differences in their views. Our results reveal an AI paradox: students demonstrate strong ethical reasoning and awareness about AI, yet many report integrating AI-generated code without a thorough understanding of it. The majority of our cohort attributed significant responsibility for AI practices to politics and corporations, potentially reflecting Germany's cultural context, with its strict regulations and data privacy discourse. Boys reported more frequent and experimental use of AI-assisted programming, whereas girls expressed greater scepticism and emphasised peer collaboration over GenAI assistance. Our findings highlight the importance of culturally responsive software engineering education that strengthens critical AI literacy in AI-assisted programming by linking ethics to concrete code artefacts and preparing young learners for this AI-driven software landscape.

Real Talk, Virtual Faces: A Formal Concept Analysis of Personality and Sentiment in Influencer Audiences cs.CY

Virtual influencers (VIs), digitally synthetic social-media personas, attract audiences whose discourse appears qualitatively different from discourse around human influencers (HIs). Existing work characterises this difference through surveys or aggregate engagement statistics, which reveal what audiences say but not how multiple signals co-occur. We propose a two-layer, structure-first framework grounded in Formal Concept Analysis (FCA) and association rule mining. The first layer applies FCA with support-based iceberg filtering to weekly-aggregated comment data, extracting discourse profiles: weekly co-occurrence bundles of sentiment, Big Five personality cues, and topic tags. The second layer mines association rules at the comment level, revealing personality-sentiment-topic dependencies invisible to frequency-table analysis. Applied to YouTube comments from three VI-HI influencer pairs, the two-layer analysis reveals a consistent structural divergence: HI discourse concentrates into a single, emotionally regulated, stability-centred regime in which low neuroticism anchors positivity, while VI discourse supports three structurally distinct discourse modes, including an appearance-discourse cluster absent from HI despite near-equal marginal prevalence. Topic-specific analyses further show that VI contexts exhibit negative sentiment in psychologically sensitive domains (mental health, body image, artificial identity) relative to HI contexts. Our results position FCA as a principled tool for multi-signal discourse analysis and demonstrate that virtuality reshapes not just what audiences say, but the underlying grammar of how signals co-occur in their reactions.
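The two layers map onto two short computations. The sketch below is our own simplification, not the paper's pipeline: iceberg filtering is approximated by support-thresholded attribute bundles (closed-concept computation is omitted), and the toy tags and thresholds are invented for illustration.

```python
from itertools import combinations
from collections import Counter

# Toy sketch (our simplification, not the paper's code). Each "comment"
# is a set of sentiment / Big Five / topic tags, as in a formal context.

comments = [
    {"positive", "low_neuroticism", "topic:music"},
    {"positive", "low_neuroticism", "topic:music"},
    {"negative", "high_neuroticism", "topic:body_image"},
    {"positive", "high_openness", "topic:appearance"},
    {"negative", "high_neuroticism", "topic:mental_health"},
]

def iceberg_bundles(contexts, min_support=0.4, max_size=2):
    """Layer 1 analogue: attribute bundles whose support clears the
    iceberg threshold (frequent co-occurrence profiles)."""
    n = len(contexts)
    counts = Counter()
    for attrs in contexts:
        for k in range(1, max_size + 1):
            for bundle in combinations(sorted(attrs), k):
                counts[bundle] += 1
    return {b: c / n for b, c in counts.items() if c / n >= min_support}

def rule_confidence(contexts, antecedent, consequent):
    """Layer 2 analogue: confidence of the rule antecedent -> consequent
    at the comment level."""
    has_a = [c for c in contexts if antecedent <= c]
    return sum(consequent <= c for c in has_a) / len(has_a) if has_a else 0.0

print(iceberg_bundles(comments))
print(rule_confidence(comments, {"high_neuroticism"}, {"negative"}))  # 1.0
```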

PopResume: Causal Fairness Evaluation of LLM/VLM Resume Screeners with Population-Representative Dataset cs.CY

We present PopResume, a population-representative resume dataset for causal fairness auditing of LLM- and VLM-based resume screening systems. Unlike existing benchmarks that rely on manually injected demographic information and outcome-level disparities, PopResume is grounded in population statistics and preserves natural attribute relationships, enabling path-specific effect (PSE)-based fairness evaluation. We decompose the effect of a protected attribute on resume scores into two paths: the business necessity path, mediated by job-relevant qualifications, and the redlining path, mediated by demographic proxies. This distinction allows auditors to separate legally permissible from impermissible sources of disparity. Evaluating four LLMs and four VLMs on PopResume's 60.8K resumes across five occupations, we identify five representative discrimination patterns that aggregate metrics fail to capture. Our results demonstrate that PSE-based evaluation reveals fairness issues masked by outcome-level measures, underscoring the need for causally-grounded auditing frameworks in AI-assisted hiring.
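The business-necessity versus redlining decomposition can be made concrete with a toy structural model. In the sketch below everything (variable names, coefficients, the linear screener) is our own assumption, not PopResume's data or models; it only illustrates how nested counterfactuals separate the two paths.

```python
import random

# Toy sketch of path-specific effect (PSE) estimation.
# A: protected attribute, Q: job-relevant qualification (business-
# necessity mediator), P: demographic proxy (redlining mediator),
# S: resume score. All coefficients are invented for illustration.

random.seed(0)

def qualification(a):  # Q depends only mildly on A in this toy world
    return 0.2 * a + random.gauss(0, 1)

def proxy(a):          # P (name or address cues, say) tracks A strongly
    return 0.9 * a + random.gauss(0, 0.3)

def score(q, p):       # the screener rewards Q but also leaks through P
    return 1.0 * q - 0.5 * p + random.gauss(0, 0.5)

def mean_score(a_for_q, a_for_p, n=50_000):
    """Nested counterfactual: set A to a_for_q along the Q-path and to
    a_for_p along the P-path, then average the resulting scores."""
    return sum(score(qualification(a_for_q), proxy(a_for_p))
               for _ in range(n)) / n

m11, m10, m00 = mean_score(1, 1), mean_score(1, 0), mean_score(0, 0)
print(f"total effect        {m11 - m00:+.3f}")
print(f"business necessity  {m10 - m00:+.3f}  (via Q only)")
print(f"redlining           {m11 - m10:+.3f}  (via P only)")
```

An outcome-level audit sees only the total effect; the decomposition shows it is the net of a small legitimate path and a large impermissible one, which is exactly the pattern aggregate metrics mask.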

Benchmarks
Artificial Analysis Intelligence Index

Composite score across coding, math, and reasoning

#  Model                   Score  tok/s  $/1M tokens
1  GPT-5.4                 57.2   84     $5.63
2  Gemini 3.1 Pro Preview  57.2   115    $4.50
3  GPT-5.3 Codex           54     81     $4.81
4  Claude Opus 4.6         53     56     $10.00
5  Claude Sonnet 4.6       51.7   66     $6.00
SWE-rebench

Agentic coding on real-world software engineering tasks

#  Model                      Score
1  Claude Opus 4.6            65.3%
2  gpt-5.2-2025-12-11-medium  64.4%
3  GLM-5                      62.8%
4  gpt-5.4-2026-03-05-medium  62.8%
5  Gemini 3.1 Pro Preview     62.3%