Responsible Requirements Engineering in a VUCA World

This is a personal copy of a column in IEEE Software (Sep/Oct 2025). Republished with permission.

Held in sunny Barcelona, REFSQ 2025 brought together researchers and practitioners around the theme of social responsibility in requirements engineering. As diversity, equity, and inclusion (DEI) efforts are currently under threat, this year’s conference theme felt more relevant than ever. This column summarizes a panel discussion on responsibility. Given the mediator role of our field, what other software engineering discipline could be a better fit for responsible engineering? And this will just keep getting more important as the influence of technology on society increases. Happy reading! – Markus Borg

Markus Borg, Martina Beck, Andreas Beck, Simon Jiménez, Yiannis Kanellopoulos, and Birgit Penzenstadler

The world is getting more volatile, uncertain, complex, and ambiguous (VUCA). We face geopolitical instability, full-scale wars, AI disruption, climate change, and regulatory uncertainty. Add to that populist political micromanagement, and long-term planning becomes harder than ever. Meanwhile, digital transformation, automation, and agile practices push organizations to move faster – often at the expense of structure and predictability.

In this reality, what does responsible requirements engineering (RE) look like? Can RE help bring clarity, or are we merely reacting to change? As technology increasingly shapes society, how do we ensure RE supports all stakeholders?

These questions were explored during the closing panel of the REFSQ25 conference. Martina Beck and Markus Borg moderated a discussion with four experienced panelists.

After opening statements, the discussion unfolded into four themes: volatility, uncertainty and ambiguity, inclusion and social responsibility, and AI disruption, before concluding with reflections on the future of responsible RE.

Dr. Yiannis Kanellopoulos (code4thought, Greece). Seasoned entrepreneur and AI expert with over 17 years of experience in evaluating large-scale software systems. Founder of code4thought, one of Greece’s first startups specializing in AI testing, auditing, and IT consulting. A strong advocate for Responsible AI, he’s committed to designing and implementing technology thoughtfully.
Andreas Beck (Linde Gas, Germany). Senior IT Manager for Digital Design, Innovation and Strategy at Linde Gas in Region Europe West. He has worked on responsible healthcare applications for decades, as a programmer, designer, architect, requirements analyst, and project manager.
Simon Jiménez (storywise, Austria). CEO and founder of storywise, which offers an AI platform that helps create detailed requirements specifications. With 20 years in the field, he uses his experience to design next-generation AI tooling for requirements engineers.
Dr. Birgit Penzenstadler (Chalmers University of Technology and University of Gothenburg, Sweden). Associate Professor. For a decade, she has pioneered research on the relationship between sustainability and software engineering.
Panelists. From left to right in the above photo, Yiannis Kanellopoulos, Andreas Beck, Simon Jiménez, and Birgit Penzenstadler.

Opening Statements

Yiannis: RE must be an integral part of any project, not an afterthought. Requirements engineers should be involved from day one, not just to gather requirements, but to ensure that all relevant stakeholders are identified, engaged, and aligned early. In the era of AI systems, this is even more critical – not only for functional correctness, but to embed values like transparency, fairness, and accountability from the start. It’s not just about capturing what the system should do, but how it should behave in the world. And for whom!

Andreas: I remember working in Detroit for automotive customers when my American boss gave me the book “BLUR – The Speed of Change in the Connected Economy” [1] to emphasize how fast things were changing. This was in 1999, 26 years ago… So to me, nothing has really changed. We can argue whether the rate of acceleration itself is increasing, but that’s just the third derivative.

In RE, the same old principles still apply. Look beyond the “stated” requirement to uncover the true underlying need. As in the “5 whys” technique from Toyota [2] (the idea is to not stop at the first explanation, but keep asking “why?” until you reach the root cause or true need), responsible RE means understanding the real-world consequences of fulfilling a requirement. We must make the abstract concrete and envision the future.
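The “5 whys” chain Andreas describes can be pictured as a simple interview aid. The following sketch is purely illustrative – the requirement and answers are made-up examples, and real elicitation happens in conversation, not in code:

```python
# Sketch: record a "5 whys" chain from a stated requirement down toward
# the root need. Each answer explains the item that precedes it.

def five_whys(stated_requirement: str, answers: list[str]) -> list[str]:
    """Build the why-chain, stopping after at most five rounds of 'why?'."""
    chain = [stated_requirement]
    for answer in answers[:5]:
        chain.append(answer)
    return chain

chain = five_whys(
    "Export reports as PDF",
    [
        "Managers print reports for meetings",     # why export?
        "Meeting rooms have no dashboard access",  # why print?
        "Licenses are limited to analysts",        # why no access?
    ],
)
# chain[-1] is the closest we got to the underlying need
```

The point of keeping the chain explicit is that the last element – not the first – is what responsible RE should evaluate for real-world consequences.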

Simon: I’ll take a technology-oriented stance. When the world is getting more volatile, the requirements must be better. With good, unambiguous requirements, getting the software out could be largely done by AI in 10-20 years. However, getting the requirements right will remain hard work! AI can support that activity, but it cannot do it for us.

Birgit: VUCA – the world hasn’t just become like that. We made it so! A rap song by the German act Die Fantastischen Vier includes the line “Du stehst nicht im, du bist der Stau”, which translates to “You’re not stuck in the traffic jam; you are the traffic jam.” In the same way, we’ve made the world VUCA with so many of our past accumulated choices. As developers and users, we’ve been chasing speed and capability.

Responsible RE starts by asking, “Should we build this?” Then, as Andreas stressed, “Why should we?” And to connect to Yiannis, include all stakeholders. Including nature at large, which happens to be the topic of my keynote tomorrow. (A recording of the REFSQ25 keynote ”How to accommodate for nature as a stakeholder in IT systems?” is available online)

Volatility

Martina: How can RE remain grounded when the world – and the tech stacks – are shifting faster than ever? Where do you see RE struggling most in a VUCA world?

Simon: While the how – features, UI, tech stack – might change rapidly, the fundamental business need or user problem often remains stable. As we’ve heard, responsible RE prioritizes understanding the “why”, not just the “what.” Volatility demands constant re-evaluation of what matters most right now. Agile models keep shifting priorities and user stories. I advocate traceability. Managing new priorities so that code changes are traceable to the original demands is very helpful.

Yiannis: Agreed. RE can remain grounded in a VUCA world by anchoring itself to the core problem: the human or organizational need that the system is meant to address. Technology will keep evolving, but it’s just an enabler. What matters is understanding why we’re building something, not just how.

The strength of RE lies in problem framing, stakeholder alignment, and surfacing assumptions. These skills are timeless, even as tech stacks change. RE struggles most when it becomes a checklist exercise or is dragged in too late – when decisions are already baked in. In volatile environments, RE’s value lies in asking the hard questions early, embracing iteration, and constantly revalidating what “value” looks like – because that too can shift. Ultimately, even in complex systems, the root challenges are of human nature: goals, conflicts, and trade-offs.

Birgit: Speaking of nature: consider what we have deemed environmental externalities, defined by UNESCWA as “uncompensated environmental effects of production and consumption that affect consumer utility and enterprise cost outside the market mechanism.” Six out of nine planetary boundaries have been breached [3]. We’ve destabilized our own operating environment for humanity – largely because of not taking into account – or into accounting! – the environmental impacts of our actions.

To stay grounded, RE must resist clients’ pressure to chase the next trend. Instead, we must keep asking uncomfortable questions. “Why do you want that?” or “Walk me through how that should work.”

Andreas: Whether it’s waterfall or agile, we’re still trying to understand requirements. Software engineering always involves translating vague natural language into formal language for computers to execute. This demands implicit contextual knowledge. In all volatility, we stay grounded through RE, because we know what value it adds to the process.

Uncertainty & Ambiguity

Markus: Should RE embrace the VUCA world’s uncertainty, or try to tame it? Can our methods support foresight, speculation, or scenario-based planning?

Andreas: Begin by embracing it. Uncertainty gives valuable opportunities to ask seemingly naïve questions, which can help tame it later. Ambiguities often mean that important decisions haven’t been made yet, or that things lack conceptual clarity. Maybe two things are treated as one when they should be separated and named distinctly.

Uncertainty and ambiguity can spark fruitful discussions. On higher levels – like economy, society, and nature – they should be embraced. In RE, we move forward by clearly stating assumptions when knowledge is incomplete. This helps maintain momentum and supports learning through fail-fast thinking.

Simon: Responsible RE should obviously embrace uncertainty rather than pretend it doesn’t exist. Like Andreas said, we must be explicit about what we know, assume, and don’t know. When ambiguity is resolved, we should also document how it was resolved, to help future work.

Scenario-based planning is useful, sure. But only to a point. There’s a cost-benefit trade-off. At some point, simply trying things out becomes more effective than designing for every possible outcome. “Done and 80% perfect” beats “perfect but not ready.”

Birgit: I’d love to see us embrace future thinking methods more. Scenario-based planning with a best-case, worst-case, and world-on-fire scenario could really help us prepare.

Yiannis: I agree, RE should embrace uncertainty. At code4thought, we insist that software quality isn’t an afterthought. A well-designed, thoughtful system is easier to maintain. And far more likely to adapt to fast-changing conditions.

Inclusion and Social Responsibility

Martina: How good are we at the bigger picture? Are we designing for everyone, or just those in the room – or in the dataset, in the case of AI systems?

Andreas: We probably can’t design for all users. Trying to do so may ironically undermine inclusion by reducing people to predefined categories like “those who can’t hear, see, or walk.” That counters the idea of treating each user as an individual. Economic constraints force us to prioritize. Our first focus must be on the users directly affected. For specialized needs, tailored solutions may be more appropriate than including everyone in one system.

Simon: We often cannot afford it, as Andreas said. Even with the new European Accessibility Act, companies often ignore it. We can try to pitch good accessibility as insurance against future claims, but it’s a hard sell. I see a novel use case for Large Language Models (LLMs). They can simulate different disabilities, even ones you hadn’t considered. When perspectives are missing in your current world model, LLMs can help you toward completeness. For example, ask it to review your specification from the perspective of someone with a disability. You’ll likely learn something.
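As an illustration of Simon’s suggestion, here is a minimal sketch of how such review prompts might be assembled. The perspective list and prompt wording are assumptions for the example, not part of any product; the actual call to a chat-completion API is left out so the sketch stays self-contained:

```python
# Sketch: build LLM review prompts that probe a requirements specification
# from perspectives that may be missing from the team's world model.
# The perspectives below are illustrative, not an exhaustive taxonomy.

PERSPECTIVES = [
    "a screen-reader user who is blind",
    "a user with severe motor impairment navigating by keyboard only",
    "a user with only intermittent internet access",
]

def build_review_prompt(spec_text: str, perspective: str) -> str:
    """Compose a prompt asking an LLM to critique a spec from one perspective."""
    return (
        f"Review the following requirements specification from the "
        f"perspective of {perspective}. List requirements that would "
        f"exclude or disadvantage this user, and suggest revisions.\n\n"
        f"---\n{spec_text}"
    )

# The resulting prompts would then be sent to any chat-completion API.
prompts = [build_review_prompt("The system shall ...", p) for p in PERSPECTIVES]
```

One prompt per perspective keeps each critique focused; merging all perspectives into one prompt tends to produce shallower answers.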

Yiannis: Currently, we mostly design for those in the room. And with AI, we design for whoever is represented in the training data. That often means we overlook unrepresented users who are still affected. A responsible approach means empathizing with all relevant stakeholders, including marginalized groups and anyone likely to be harmed by a system failure.

Birgit: Yes, despite good intentions, we often stop at designing for those in the room. A typical RE step is creating a persona from a user group unlike ourselves. But that often stays superficial. A simplistic persona can’t capture the real lived experience of someone who is blind, someone who has faced lifelong discrimination, or someone with only intermittent access to electricity.

Disruption by AI and Tech Shifts

Markus: Like so many other fields, software engineering is being disrupted by AI. Is RE keeping up? Is it being enhanced or eroded? And how can RE stay responsible without slowing innovation?

Birgit: We’re seeing more empirical studies on this. Companies also report changes in how requirements are specified. Some write them first and refine with AI. Others generate them directly through sophisticated prompt engineering followed by human refinement. Either way, RE is changing. AI may assist, but it doesn’t eliminate the hard work. Responsibility still falls on us.

Simon: I mostly see the other end of the spectrum. Many RE practitioners still rely on basic tools like Word and Excel, sometimes copy-pasting from AI chat interfaces. What if the LLM had direct access to our tools and data? It could assist in proactively suggesting, exploring, and generating. That’s what we’re building at storywise. We’ve barely scratched the surface of what’s possible.

Yiannis: I agree with Simon. AI can enhance RE, rather than erode it. LLMs can explain and reverse-engineer legacy code, helping uncover past priorities. Such insights can support more sustainable decisions going forward. And help RE stay responsible by ensuring continuity.

Andreas: I’ll offer a more critical view. The real risk is losing human understanding. AI isn’t creative, it needs input from us. If we stop developing our own knowledge, we won’t have anything meaningful to feed it.

Imagine we use AI to write documents and later rely on AI to read and interpret them. If AI agents start talking only to other agents, we lose touch with both the domain and the profession. Picture someone visiting Linde in 10 years and asking, “What does Linde do?” And the answer is, “We don’t really know. The AI agents run everything now.”

The Future of RE

Martina: Before we wrap up, let’s look ahead. Are we evolving RE to meet today’s world, or just patching a 20th-century model? What’s one thing a responsible requirements engineer should do differently starting tomorrow?

Birgit: The RE community is always evolving. Sometimes within our existing skillset, sometimes by adapting as things shift. We should keep doing what we’re good at, while also staying flexible, curious, and true to our values. Always do your best.

Simon: RE will transform, starting with how we enter information. AI will help us feed structured data into projects with less manual work. We’ll have more complete but condensed documents, interactively linked to diagrams, regulations, and mockups. Prototypes will get better too. However, we must stay in control. An LLM can’t reason logically across steps. Every step must still be approved by a human.

Andreas: One challenge is that “everyone” now thinks they can build software. Not everyone performs surgery or designs bridges. But through low-code platforms, we’re promoting citizen developers. This has real-world consequences because software needs to be secure, operable, and sustainable. That requires knowing what should be generic/specific and stable/flexible, and how the real world maps into formal models. That’s RE’s job, together with design. And it’s not just about methods – it’s about genuine curiosity. Stay curious out there!

Yiannis: A responsible requirements engineer must be present from day one. Physically, intellectually, and strategically. In an AI-driven world, RE must evolve beyond documentation. It needs to be embedded in decisions as they happen.

Being present also means playing translator. RE connects data scientists, engineers, and business owners. Responsibility doesn’t come from rigidity, but from shared understanding. We must move from gatekeepers to guides: someone who ensures that, as the system evolves, its purpose, risks, and responsibilities are always visible and actively managed.

References

  • [1] Stan Davis and Christopher Meyer. Blur: The Speed of Change in the Connected Economy, Grand Central Publishing, New York, NY, 1999.
  • [2] Taiichi Ohno. Toyota Production System: Beyond Large-Scale Production, Productivity Press, Cambridge, MA, 1988.
  • [3] Katherine Richardson et al. “Earth Beyond Six of Nine Planetary Boundaries,” Science Advances, vol. 9, eadh2458, 2023. DOI: 10.1126/sciadv.adh2458