Agile is -- Agile isn't
Software development firms that have endeavored to adopt Agile into their culture typically end up with mixed results. In its purest sense, Agile is a philosophy, a way of looking at a software manufacturing process that prescribes approaches to work natural to the overall software development life cycle. But we are literal beings, especially in our business-oriented culture, where most of us are expected to adopt what has been rigidly prescribed for us to do and use -- not adapt. So we often find ourselves trying to force the square peg into the round hole, and unless the diameter of that hole is much larger than the square's diagonal, it doesn't fit -- and even when it is large enough, it still doesn't work, because the fit isn't secure. Unless the company has sanctioned the business to adapt, we will pile up on ourselves in a worthless heap, like the marching band in Animal House when Stork (Douglas Kenney) leads it blindly into a dead-end alley.
Agile, in its genesis, is about promoting sustainable development practices. Its tenets tend to pursue the benefits of focused collaboration in more tightly framed teams. Individuals respond well to work structures that are recognized by a wider group, where what works is acknowledged and what doesn't is discarded. Most of the tenets in Agile are configurable, so that they can be deployed in different ways by different firms with different customers and product lines. It's more about being concerned with a successful outcome than it is about adhering to some ordained process. In so doing, you get what it is that you were after in the first place: well-functioning and cost-efficiently manufactured software.
Software, even old legacy systems, is constantly being updated and reengineered to behave in profoundly different ways, so that process outcomes evolve as the business evolves. Software therefore is evolutionary in its form and function, not constant. But let's not confuse a flexible development approach that embraces (requirement) changes discovered during the build with the highly structured business process flow that the software is expected to support and process. The two are very different things. Folks in the rigid process-approach camp tend to point to their hardened development framework for highly constrained systems and say that we shouldn't try to reinvent the wheel. Look, everything is in a constant state of change. Why shouldn't we evolve it? Do you think that the first stone-carved wheel is even remotely the same as what we use today on the F/A-18 Super Hornet fighter jet or any other modern, commercial aircraft, ones that typically involve complex trunnion-arranged strut assemblies with up to 6,000 psi in their pneumatic chambers? I sure hope you've reinvented the wheel in the plane I'm flying on today. It's about evolving process in response to an evolving need.
Stupid is as stupid does. Agile doesn't kill projects, people do. Intuitive, creative people succeed when they embrace adaptation in their engineering processes, because these folks know it's about their work ethic -- not (only) their methods. Agile is supposed to be about adaptation. It is not about some whiz-bang, no-design-upfront, compile-requirements-on-the-fly, iterate-and-engineer-and-go approach to software development that might as well use pixie dust to produce the next generation of software engineering protocol. Enterprising people and teams that choose to focus on the needs of their target market first and then scale their manufacturing process to that end are the folks who will advance thought and services leadership. Those will be the ones who actually deliver the something better.
About the financial services industry
After a dodgy start in our storied U.S. history, owing largely to agrarian-minded foes of central banking (Andrew Jackson chief among them), it wasn't until we were fully engaged in the Civil War (1863) that the National Banking Act was enacted, paving the way for a uniform currency and nationally chartered banks. The subsequent road toward the model we have today wasn't without some profound turmoil, however. In 1893 came the worst depression our young country had ever seen, a panic that was largely stabilized at the time by the interventions of J.P. Morgan.
In 1913, President Woodrow Wilson signed into law the Federal Reserve Act, a legislative compromise among the concept of a decentralized central bank, the competing interests of private banks, and populist sentiment. After the market crash and during the subsequent Depression period (1929-1933), nearly 10,000 banks failed. In response, the U.S. enacted the Banking Act of 1933 (aka the Glass-Steagall Act). Between then and 2012, seven other major acts followed, all of which work to constrain the financial services framework in this country and the conduct rules that we invariably codify in our software behaviors, outcomes, and work flows.
Since software applications were first created for financial services firms, the functions they endeavor to support have mostly been highly controlled ones, largely constrained by prescriptive conduct rules that are both complex and inefficient. As technology evolved and functional outcomes became more logically scalable, banking operations followed suit and soon began to scale their human capital component in ways more deeply aligned with (and dependent upon) the use of software tools. But it was a case of follow-the-(technology)-leader, not the other way around -- and it still is, in many material ways. Today, regulators acknowledge the operational and risk-avoidance savings that software delivers to the industry, but they are keen to resist allowing an environment to be created whereby 1's and 0's provide a means for shadows to control financial market outcomes or for dealers to manipulate investors' assets or results. As one might imagine, technology can be a tough sell to regulators in the face of that ongoing concern.
Interestingly, as highly regulated as the financial services industry has become, not all of its rules are so prescriptive. Many are principle-based, which allows firms to massage specific operational work flows and even outcomes, so supporting enterprise-level software systems have to be highly parameterized -- but in deeply constrained ways. Our question becomes whether Agile can really support this level of functional constraint. In particular, can requirements artifacts be so thinly derived, strung together in epic constituency groupings, and still sufficiently support highly constrained systems? Not really. Not without some core adjustments to these very literal artifact definitions. What, then, is the answer?
The problem with users
Agile doctrine says much about the brief, thin format of user stories. And this is where we in enterprise solution engineering and services tend to recoil in disbelief. It's too far-fetched to believe that a few scant lines in a set of epic compound or complex user story folios foster reliable requirements discovery and validation. It's simply not realistic. Unfortunately for Agile proponents, every example of user story composition (and process decomposition) that I've seen in the public domain uses nonsensical, oversimplified examples that bear no resemblance to the realistic functions in my projects.
In the bank, users are tunnel-visioned folks who wear blinders, particularly in the operations departments of large financial institutions, where deep and narrow focus is embodied by users who might have spent their entire careers at a single desk or within a single functional area of the firm. That isn't an insult; it's a necessary level of process depth. But it's hardly a perspective that engenders systematic process fluency across interdependent supervisory functions. Now, merge that dubious reality with the fact that banks and broker-dealers are consumers of financial services transactional and trading software, and that they expect their vendors to deliver an integrated suite of processed behaviors -- one that reduces operational overhead in the human capital equation by increasing functional efficiency without circumventing regulation. To build that kind of system, you need product experts who are truly experts in their domain. They typically act as the proxy for the user, spending innumerable hours explaining the rationale behind specific industry-facing functions to technicians who cannot be expected to master that domain and their own technical skill set at the same time -- and be equally proficient at both. We require exceptional proficiency on both ends of the engineering spectrum -- in the requirements discovery phase and in the engineering cycle itself.
There is another flaw in the assumption that, by themselves, user stories can provide a reliable instructional library of the process the software is expected to produce: the assumption that users actually know what they want. They usually know what they do, and they usually know what they can never do -- but do they know what they want? These are not efficiency experts. Users are not folks who have distilled operational efficiency into their process, because that's not their charter -- not in a bank, anyway. Historically, they come from an environment that threw armies of staff at vast paper-facing work flows, where the repeatable process was the objective and any change to that process for the sake of efficiency had to be debated at executive levels before it might be sanctioned and then deployed in the bull pen or on the trading floor. These are people who still trade securities in the pits of exchanges by open outcry, because that is still the fairest method in the world for a security to find its feet in terms of a price determined by the market itself. And that's because we haven't found a way for a computer to assume the role of the market maker without being exposed to electronic manipulation or processing by shadows in the machine.
Banks are slow to change for all of these reasons. But they love their software. They clamor for highly resilient, mission-critical systems that scale to their organizational apparatus and comply with regulation. The software cannot glitch, it cannot fail, and it must batch efficiently -- not because we need an evening batch cycle so much anymore, but because night follows day, global markets open and close in tiered time zones, and today becomes tomorrow. For financial obligation-tracking purposes, these systems have to know when a day is through and the next session has begun.
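To make the day-rollover idea concrete, here is a toy sketch of mapping a timestamp to a market's business session. The close times, the market names, and the absence of weekend and holiday handling are illustrative assumptions only, not any exchange's real calendar:

```python
# Toy session-rollover sketch: a timestamp after a market's local close
# belongs to the NEXT business session for that market. Close times below
# are assumptions for illustration; real systems use exchange calendars.
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

MARKET_CLOSE = {
    "NYSE": (ZoneInfo("America/New_York"), time(16, 0)),  # assumed 4:00 p.m. close
    "TSE":  (ZoneInfo("Asia/Tokyo"),       time(15, 0)),  # assumed 3:00 p.m. close
}

def session_date(ts_utc: datetime, market: str) -> str:
    """Map an aware UTC timestamp to the market's business-session date."""
    tz, close = MARKET_CLOSE[market]
    local = ts_utc.astimezone(tz)
    session = local.date()
    if local.time() > close:          # past the close: roll into tomorrow's session
        session += timedelta(days=1)
    return session.isoformat()

ts = datetime(2023, 3, 1, 22, 30, tzinfo=ZoneInfo("UTC"))
# 17:30 in New York (after the assumed close) rolls forward; 07:30 the next
# morning in Tokyo (before its close) stays on that Tokyo date.
assert session_date(ts, "NYSE") == "2023-03-02"
assert session_date(ts, "TSE") == "2023-03-02"
```

The point is only that "today" is a per-market fact the system must compute, not a global constant.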
Users are folks who might be scattered geographically across the bank's capital markets business, and each is focused on his or her particular part of the scheme, where one department or desk produces an exception and another manages and works to resolve it. They are not nearly as interconnected as we might think they are. But it's the vendor producing the compliant (software) outcome that is most responsible for putting in place a team capable enough to see the business as a dynamic ecosystem of process and behavioral outcomes.
Chasing the written requirement
In the 1973 movie The Paper Chase, a story about a young Harvard law student studying contract law while courting his professor's daughter, the professor's method of eliciting discovery and truth in process (and reasoning) is the Socratic method. It's often the way requirements are explored (and ultimately validated) in the financial services industry, where seemingly conflicting conduct rules (prescriptive versus principle-based) muddy the logic a software manufacturer is trying to navigate, because the engineered process has to codify these behaviors before a user might ever actually encounter them. The method works to eliminate contradictions through a dialectic approach -- one whose purpose is to resolve a disagreement through rational discourse, by proving that one or another position constitutes a contradiction. In our industry, software requirements are often distilled through the decomposition of arguments that relate financial instrument exposure concepts to just and equitable practices of trade and, further, to an extended concept of financial prudence and conservatorship. And so it goes that we are more concerned with behavior-driven development (BDD), which focuses first on context and then on outcome. As Omar Al Zabir points out in his article on user stories and user needs, the cause-and-effect model is more precise and accurate than presenting the software's prescribed outcome through some definition of a user and that user's subjective opinion on what might need to happen. Al Zabir also asserts that if you can get product managers to be even 10 percent more precise in defining requirements (before development begins, or before it's even estimated in terms of effort), you can save at least 30 percent of the total waste cost across the SDLC.
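The cause-and-effect style can be sketched in code. In this minimal Python illustration, the margin-call rule and its numbers are hypothetical, invented for the example and not drawn from any actual regulation; the point is the shape of the requirement -- context ("given"), trigger ("when"), required outcome ("then") -- rather than a thin sentence about what a user "wants":

```python
# Hypothetical behavior rule, stated as cause and effect rather than
# as a user story. The threshold logic below is illustrative only.

def margin_call_required(equity: float, maintenance_requirement: float) -> bool:
    """Outcome rule: a call is required when equity falls below maintenance."""
    return equity < maintenance_requirement

# Given an account with 4,500 of equity against a 5,000 maintenance requirement,
# when the end-of-day control runs, then a margin call must be issued.
assert margin_call_required(4_500.0, 5_000.0) is True

# Given equity at or above the requirement, then no call is issued.
assert margin_call_required(5_000.0, 5_000.0) is False
```

Written this way, the requirement is testable and auditable on its face, which is precisely what a one-line story about user intent is not.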
Unfortunately, user stories document only a high-level intention, not a prescribed and required behavior -- a fatal shortcoming when the behavior is algorithmic or mathematically engineered. Moreover, in the financial services industry there is a particularly pointed requirement imposed on member firms that they sufficiently document their software iterations and functionalities, so that examining authorities are able to validate compliance with particularly complex securities treatment and financial management control functions. As one might expect, these are typically multilayered, and it's easy to play hide-the-ball with examiners if explicit process instructions are missing or too vague to support forensic accounting, which invariably works from the end result backward.
Users who engage complex financial process solutions in their day-to-day roles typically use these tools to perform a kind of accounting function, where one part of their objective is to validate the work flow results the software has performed (on their behalf) and the other part is to isolate exceptions that require additional processing and decision making. A great amount of human decision making in the bank today still isn't performed directly in a software program. Software savings are still primarily valued by the ability to slog through in seconds what would take an army of humans days or even weeks, especially from the perspective of volume. Other application functions work to produce and engage work flow between decision points that the software could make by itself but can't, because a human is required to intervene and inject a personally synthesized decision into the process. Uninformed technicians might believe, honorably so, that a given efficiency could be inserted in the process at a given point, but the product expert explains that no, it must be this way, because a human supervisory function exists in this spot that cannot be circumvented by a machine -- regulators tend to refer to this as possession and control.
The written requirement or process instruction therefore is typically the most important part of the build, and it's done in the beginning because that's where most of the effort is defined in terms of the application's behavioral form and function. It doesn't matter if it's an entirely new feature or an enhancement to an existing and already complex process -- it still requires careful and explicit process distillation that is simply too complex to be represented in a user story or to be batted around verbally by a team in a spike or a stand-up.
Writing good requirements -- in Agile or Waterfall or whatever you might want to say you're using -- is the most critical task in any build for highly constrained systems, because it's what starts the process and it's what memorializes the effort for years to come. We do look backward at work after it's completed, and we require significant context to understand why we did what we did at the time. Written requirements are also something shared by different stakeholders across the same team. A team can't possibly endeavor to paint a wall blue if its members don't understand what shade of blue is required, or even that blue is the fundamental color expected in the job. How exactly do we communicate that? By showing all of them the same shade of blue. In more complex builds, the team must view the same requirements and must engage the author in a mutually progressive exchange to digest them and to understand their behavioral context. When the beneficiaries of the requirements agree that certain clarifications need to be made, editing occurs and the written word is clarified accordingly. The result we expect is that the team will be able to move forward with confidence and real momentum.
Our world today is complex. Depending on whom you ask, we have accumulated nearly 200,000 years of storied history, most of which isn't written down. Given that we consider ourselves "wise humans" (the meaning of the Latin Homo sapiens), with highly developed brains, a bipedal gait, and opposable thumbs, why, then, do we have so much trouble creating a process for writing requirements?
It's probably because we tend to confuse the complexity of a thing with the kind of tool we think we need to do the task. Another reason is that quite often we tend to try to simplify that which is inherently complex -- and good writing is an inherently complex task. Unfortunately, there is no easy method or template we can use to produce a superior outcome. It takes practice and talent and more practice. But let's not be too simplistic.
Realistically, we often need more than one tool, because the job progresses through different stages of completeness, and different tools are required to address the subtasks that exist only within those stages. From that perspective, let us consider requirements gathering a function with variables, as opposed to a mere task. Using the function concept, we encounter variables that work to change the outcome of the function -- a natural extension of the reality of gathering requirements for many different kinds of applications and complexities of builds. But the approach, the formula, stays largely the same. The build might be greenfield, where everything about it is completely new and therefore needs a much larger set of conditional definitions and process qualifications to give the actual process instructions the context developers require to engineer the expected outcome. Brownfield projects, on the other hand, typically involve bolting new processes onto existing functions and retrofitting existing behaviors to new design changes. Sometimes that requires equally extensive documentation; it depends on the context and nature of the work.
We've found that all bodies of work in highly constrained systems, regardless of their "field" distinction, require two or three statements that need to be crafted and placed at the beginning of any requirements document. These contextual statements work to capture and memorialize the context of the work for today's team and for tomorrow's auditors, or for team members on related projects in subsequent years who might be attempting to educate themselves about what was done in the past and why.
Business case
The business case provides the commercial reasoning driving the build and, typically, the value to the consumer of the software that the new functionality is intended to serve.
Business requirement context
The business requirement context statement isn't always necessary but adds value when the build involves or engages a complex industry process that isn't easily understood by outsiders or even business people without that specific industry experience. It tends to backfill the business reasoning behind the work to provide additional visibility into the way logic should be engineered to behave.
Development scope
The development scope statement is written by the business side and describes where the work will be performed. This is particularly valuable in brownfield projects, where it gives developers a sense of peripheral vision into the level of effort (and skill set) required of the technical staff to be resourced to it. It is not a technical statement written by technicians that describes how something is to be done. It also tends to reference necessary ancillary technical documentation (file layout specs, for example) that is often associated with the project, especially when data is being transmitted from one financial institution to another or to a clearing entity or regulator.
After these statements are crafted, the stage is set for more specific requirements gathering. Enter, stage left, the user story. Long before we called them user stories, we used a writing approach in which short, brief statements of functional behavior were created, categorized into relevant functional groups, and then further aligned in a fairly synchronous fashion -- in a business functional sense. We then work to discover where we might have gaps in that content relative to what it is we're attempting to build or enhance.
From there, we typically find that it's easy to see where (in the development landscape) we have work to do. Some of that work might involve the creation of a new screen or two, and when that's the case we typically try to separate the UI work -- its user-facing flow and presentation layer -- from the underlying calculator work that surfaces synthesized content into various fields and widgets in the UI layer. Other work might exist in deeper components and controls, involving pure calculator effort if that's the nature of the requirement. Or we might find that we have work to do in the event-processing controls (work flow), which can be very complex on multiple levels. Regardless of where the work lies, the important thing is that we are able to see where in the application the work lives, and thus what technical skill sets might be required (in the resources) to perform the work.
But it's inaccurate to label all of our short, functional synopsis statements as user stories, because often in vastly constrained systems they are not user functions per se. They are component cause-and-effect behaviors, driven by specific circumstantial conditions (typical in financial risk management software) toward a computational or work flow outcome. Consider a real-world example of a brokerage account that maintains a diverse set of financial instruments (stocks, bonds, options that might be held long or short), where the math works to compute some figure of equity and then some figure of excess or deficit equity. Invariably, each of these instruments behaves in very different ways in response to the behavior of the markets (prices, volatility, etc.). The objective might be to illuminate various kinds of risk (market, credit, securities, liquidity, counterparty, etc.) and to continue to measure those risks as the investor opens and closes positions that might worsen or abate the risk(s). That is constrained systems design: mixing highly complex and sometimes vaguely defined procedural rules into a fluid (software) service outcome that makes the bank's operations more efficient and less exposed.
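The equity-then-excess computation described above can be sketched in a few lines. The 25 percent maintenance rate and the sign conventions here are simplifying assumptions for illustration, not any regulator's actual formula, and real systems would apply instrument-specific treatment per position:

```python
# Simplified sketch of account equity and excess/deficit equity.
# Rates and conventions are illustrative assumptions only.

def account_equity(long_market_value: float, short_market_value: float,
                   cash_balance: float) -> float:
    # Equity = cash + value of long positions - value owed on shorts.
    return cash_balance + long_market_value - short_market_value

def excess_equity(equity: float, long_market_value: float,
                  maintenance_rate: float = 0.25) -> float:
    # Positive result = excess equity; negative = deficit (a potential call).
    return equity - maintenance_rate * long_market_value

eq = account_equity(long_market_value=100_000.0,
                    short_market_value=20_000.0,
                    cash_balance=5_000.0)                  # 85,000.0
surplus = excess_equity(eq, long_market_value=100_000.0)   # 85,000 - 25,000
assert surplus == 60_000.0
```

Each opened or closed position changes the inputs, so the behavior story is the recomputation rule itself, not any one user's interaction with a screen.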
I tend to recast the "user story" as a "behavior story," especially in areas of an application that do not exclusively involve presentation layer iterations. To be clear, however, we typically choose those words (presentation layer) carefully, because we realize that most end-user interactions with a software program involve UI-level iterations. But in our capital markets' constrained systems there is often an equally heavy computational layer and a work flow or event-processing layer that uses rules engines and other similar technologies to help drive performance. These stories are what drive the evolution of the well-written process instruction. And their up-front grouping and classification is what helps to separate parts of the software behavior into what novelists refer to as galleries. Galleries are separate parts of a story, akin to acts in a play. They may later be moved around, placed before or after another gallery, to attain a better sequential nature in the overall story, despite the fact that actual engineering tasks aren't typically embraced in any such business-facing order. Not usually. But it makes for more efficient reading and comprehension, in the same way that we might describe a person in China and a person in the U.S. as both standing upright on the ground. The mental picture we see (and present in any description) is of the two side by side, feet on the ground and sky above. But in reality, standing on opposite sides of the planet, they are at the same time positioned feet to feet, owing to the geometry of standing at opposite points on a large sphere. We tend to ignore these things because in isolation they aren't material to the problem solving. But beware -- in software engineering, sometimes they are!
We then do what Agile suggests we do, and that's to focus. We introduce these sometimes voluminous requirements documents to technical staff and walk through the content like actors do when they first get handed their parts in a play. They gather and read and interact and discuss together to understand where the parts are seamless and where parts might grind and chafe the results into something less than desirable. Then we edit, we clarify, we re-version it, and we continue that process until we have a largely actionable set of process instructions from which we will develop, test, and eventually compile user documentation. But there's more. There's the realization that we also need to protect our intellectual property when we create user guides. User guides are the closest thing to functional requirements, and unless you're careful, your competitors can pretty easily obtain the keys to your competitive kingdom and make it not so -- well, competitive.
As a global provider of highly constrained enterprise-level financial services software, with customers doing business in every capital market in the world, we know that one-size-fits-all is nonsense. We know that the outcome is what matters most. In building highly complex systems, we prefer to use more words sometimes to describe a more complex thing, and we value well-written requirements and process flows that bring form and context to the fractured and the abstract. We value results over process. We strive to increase engineering success by sanctioning increased team focus and accountability. We know that change is the reality delta between yesterday and today, and our customers expect us to embrace change to safely transition them into tomorrow. Do what works.
Contributing editor to this article: Jim Hannon