Seven benefits of framing activities as experiments

Imagine that you are considering turning your passion for digital photography into a part-time business, or that you have several ideas you think will make your monthly management meetings more effective.  How do you go about progressing these aspirations and ideas? You could go down the ‘analyze-plan-implement’ path and have one major shot at making them successful.  Or you could treat them as experiments towards your objective of generating a second source of income, in the first instance, or of making your management team more effective, in the second.

Framing these activities as experiments, rather than as firm plans, has several important practical and psychological benefits:

  1. The fear of failure is removed, or at least diminished.  Experiments are not expected to always be successful.  If they fail, there is little or no risk to our reputation and credibility.  As discussed by John Caddell at 99U, in some instances it may even be appropriate to undertake an experiment in which failure is expected.
  2. As a consequence of the reduced fear of failure, you are likely to be more creative and innovative.  You are more likely to be open to trying things that may fail because they are new or unusual, but that have the potential to provide a high return. Peter Sims over at the HBR Network Blog describes fear of failure as “the No. 1 enemy of creativity”.
  3. There is an increased focus on evaluation and learning rather than on achieving a predetermined implementation plan or being successful (at all costs). By framing the activity as an experiment, we acknowledge that we cannot fully predict the outcome and all the associated implications.  As a result we undertake the task with an inquiry-oriented and learning mindset.
  4. The learning mindset stimulates a focus on the key assumptions we are making and/or on the most important design decisions.  We ask ourselves: which areas of uncertainty are going to have the most impact on the success of this idea or proposal?  We then ensure that the experiment and the resultant observations assess and evaluate those areas of uncertainty.
  5. We are likely to take a more objective perspective of the outcomes.  Because we are not protecting our reputation and because we have more than one shot at success, we are less likely to delude ourselves, and misreport to others, that the activity has been successful.  This is a particularly important benefit in those organizations where there is strong pressure to report only good news and positive outcomes and where there is a blame culture.
  6. Experiments can usually be completed faster and with fewer resources than a complete implementation.  Experiments do not have to use ‘polished’ products or services, nor do they need to incorporate features unrelated to the assumptions or design decisions being tested.
  7. Design modifications are expected, not resisted.  Because experiments are designed to test and evaluate uncertainties, it is expected that the design will be changed as a result of the learning generated.  Change will be expected, if not encouraged, and the designers will be more open to the perspectives and suggestions of others.  The discussion and debates will be about what to change, not whether to change.  Further, if the experiment included comprehensive data capture and interpretation, the discussion and debates will be well informed.

Experiments change as we progress through Adaptive Iteration

One of the key characteristics of situations that are uncertain, unpredictable and emergent is that we cannot rely on knowledge, expertise and experience to deduce what is happening and how to change the situation to achieve our objectives. The interactions are usually too numerous, too complex and/or too subject to human choice or unforeseen events to be able to determine in advance the nature and sequence of all the cause and effect relationships.  In these situations we need to rely on observing what is happening to determine ‘what works’ and to discern the broad-based influences – we need to let “the situation talk back”.

Observing natural behaviour will give us some insights about the situation and how we may be able to influence it to achieve our objectives (create organizational change, launch a successful new product, improve the performance of a team, etc), but often we also need to perturb or probe the situation to test our ideas and insights and to generate further insights.  This is the ‘experiment’ phase of Adaptive Iteration.
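The observe, interpret, design, experiment cycle described above can be sketched as a simple feedback loop. This is only an illustrative sketch: the function names (`evaluate`, `revise`), the numeric score and the stopping rule are hypothetical placeholders introduced here, not part of Adaptive Iteration itself.

```python
# Minimal sketch of an Adaptive Iteration loop.
# All names and the scoring scheme are illustrative assumptions,
# not an API defined in the text.

def adaptive_iteration(initial_design, evaluate, revise,
                       max_rounds=5, good_enough=0.9):
    """Iterate design -> experiment/observe -> interpret -> redesign."""
    design = initial_design
    for _ in range(max_rounds):
        observations = evaluate(design)        # run the experiment and observe
        if observations["score"] >= good_enough:
            break                              # the situation 'talked back' positively
        design = revise(design, observations)  # feed the learning into the next variant
    return design
```

The point of the sketch is that the design is revised from observed feedback each round, rather than fixed in advance by an ‘analyze-plan-implement’ approach.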

An interesting way to look at experiments is by analogy with natural selection in the process of evolution.  By conducting experiments and then retaining what works in the next round of design improvements, we are imitating the evolutionary process in which the naturally occurring variants that survive selection pressures become the basis for further variants, and so on.  In Adaptive Iteration we let the situation give us feedback about what is working and what isn’t, and we then incorporate this learning into the next design variant.  Using this analogy, it may often be appropriate to conduct a number of experiments in parallel to maximize the feedback we get.  Note that these experiments are not like scientific experiments that attempt to determine and quantify governing relationships.  In scientific experiments we need to be careful that we don’t change more than one variable at a time.  In Adaptive Iteration this is much less of an issue: we are looking for the best design among a landscape of options, not the formula for the one right design.
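The natural-selection analogy of running parallel variants and retaining what works can also be made concrete with a short sketch. Everything here is a hypothetical illustration: the `score` and `mutate` functions stand in for whatever feedback and design changes a real situation would supply.

```python
# Illustrative sketch of the selection analogy: score several design
# variants in parallel, keep the ones that work, and derive new variants
# from them.  'score' and 'mutate' are assumed placeholders, not real APIs.

def evolve_designs(variants, score, mutate, generations=3, keep=2):
    """Each generation: rank all variants, retain the best, breed new ones."""
    for _ in range(generations):
        ranked = sorted(variants, key=score, reverse=True)
        survivors = ranked[:keep]                   # retain what works
        offspring = [mutate(v) for v in survivors]  # next round of variants
        variants = survivors + offspring
    return max(variants, key=score)                 # best design found so far
```

Note how this differs from a controlled scientific experiment: several variants change at once, and the goal is simply the best-performing design in the landscape explored.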

What sort of experiments can we do?  Initial small, low risk experiments include:

  • Thought experiments (see earlier blog post)
  • Sketches
  • Story boards
  • Prototypes (mock-up style)
  • Probes or ‘sighter’ trials

The objective of these initial experiments is to quickly canvass a broad range of design options and to learn more about the design context.  The focus is on establishing and testing the broad design concept and structure.  The aim is to develop one or more outline designs that are likely to satisfy the design objectives, boundaries and constraints.  This also helps test early in the design process whether we need to question and review those objectives, boundaries and constraints.  It is much more efficient to learn at this early stage that the stated design objective unnecessarily constrains the design.

A key aspect of these early low risk experiments is to make the design ideas as tangible as possible – to give a clear voice to the creative and conceptual thoughts of those involved in the design process.  Forcing the early design ideas to become tangible has two key benefits.  First, as noted above, it tests whether the high level design criteria are appropriate and whether those criteria are clearly and commonly understood by all those involved.  Second, it creates a common and tangible focus for evaluation and improvement of the design.  It is much easier to identify and explain possible improvements if you are working with a representation of the design that is tangible and contains a practical level of detail than if you are working with a broad description.  (Although thought experiments are not tangible, they should include sufficient descriptive detail to enable those involved to become immersed in the imagined situation.  They must also be structured as a forward looking exploratory ‘experiment’, not as a rationalization or explanation of a desired outcome.)

One of the characteristics, and risks, of these early adaptive iterations is that all four phases – ‘design’, ‘experiment’, ‘observe’ and ‘interpret’ – are often tightly coupled and performed by the same individual or group.  This enables rapid early iterations and the early testing of a wide range of design options.  However, it also runs the risk of ‘group think’ leading to premature narrowing of the design options.  It is important that these early experiments are observed and interpreted with open minds and from multiple perspectives (see earlier blog post).

As Adaptive Iteration moves into detailed design, the experiments use more detailed and realistic representations of the design.  The design is increasingly tested in the context in which it will be used.  The experimental options include:

  • Functional prototypes
  • Low risk trials
  • Simulations
  • Pilot implementation

These increasingly realistic experiments need to be structured to evaluate how the design performs in context.  They need to reveal any unintended consequences, any unanticipated constraints, any emerging systemic behaviour (especially non-linear behaviour) and any lessons for wider implementation.  At this stage it is also important to be open to any surprises – positive or negative.  In summary, these experiments need to be sufficiently realistic and broad to assess what works and what does not.  They also provide another opportunity to test the foundational design choices (objective, boundaries/constraints, architecture and metaphor).  The results of these more realistic experiments may indicate that some or all of the foundational design choices need to be fine-tuned, or even substantially revised.

The final form of ‘experiment’ is the real implementation of the design.  Although it is not structured as an experiment, it is, by definition, the most realistic opportunity to observe what works and what does not.  This is especially important for the situations of interest to us, those that are uncertain, unpredictable and emergent.  In these situations, the introduction of the design is likely to have a ripple effect on the context (resistance to organizational change might increase – or decrease, competitors may bring forward the timing of a new product, reinforcing complementary initiatives may be stimulated in the industry, etc.).  Consequently, it is important to continue the Adaptive Iteration approach throughout the life of the design, but especially during the early stages of introduction.  So the final experimental option is:

  • Evolving implementation.

Use ‘thought experiments’ to reduce emergent risk

Imagine that you are considering an initiative of some sort.  It could be that you are a manager and you need to respond to signs of growing tension between two of your staff.  Or perhaps you need to develop an approach to social media for your organization or business unit.  Both situations are emergent.  The details of how they will play out cannot be predicted with confidence.  Also, your actions will create responses that will further influence the dynamics of the situation and could generate unforeseen or unintended consequences.  Therefore, you need to be prepared and able to adaptively iterate your response as the situation emerges.

The potential for downside risk in such situations can be mitigated by initially conducting a series of thought experiments.  As with any real experiment these need to be planned and ‘observed’.  The thought experiment must have an intent – either to test a design hypothesis or to generate insights on which to develop a future design hypothesis.

Thought experiments rely on your ability to mentally immerse yourself in the situation.  In the case of tension between two of your staff, your design hypothesis might be that you need to get both of them together around a table and confront the issues that are creating the tension.  In your thought experiment you would play that meeting out in your mind in sufficient detail to pick up any aspect that is likely to influence the conduct and outcome of the meeting.  Your thought experiment may show that the tone of the meeting will be influenced by the expectations and frame of mind of the participants when they enter the meeting.  This will highlight that the way you invite both to the meeting will be important.  You will then need to revisit your design hypothesis to include a design for the initial setting up of the meeting.

You are now ready to rerun your thought experiment further into the meeting to see how it unfolds. You may get to a point where you simply do not have a clear enough understanding of the context for the tension between the staff to be reasonably confident about how the meeting will progress.  Because this is just a thought experiment, you may decide to make an assumption about the nature of the context and continue your experiment. If you do, it is important to be aware that the rest of the thought experiment is based on an assumption, not a fact.  You may want to test several assumptions at this point to see experimentally how they affect the conduct and outcome of the meeting.  Once you do this, you might find that, with your current level of context knowledge, the meeting is too risky at this time.  You may then revise your design hypothesis to include some real information gathering before a joint meeting with both of your staff.  Your previous thought experiment about the conduct of the meeting will be valuable in directing the nature and focus of the information gathering.  Once you have gathered the information, you will probably rerun your thought experiment.

The social media example is more complex and strategic but still requires the same approach to a thought experiment: you need an initial design hypothesis; you need to be able to immerse yourself in the context in sufficient detail to recognise critical issues and decision points; you need to be aware of when you are making assumptions; and you need to be prepared to iterate the process and modify your design hypothesis on the basis of your learning.

I will flesh out the social media example in a future blog post.