Thursday 29 October 2015

The Laws of Sport and Automation

I have had this idea in my mind and on my backlog for quite a while.  It was only after speaking at MEWT in Nottingham that I felt I really should get around to writing it. 

There are many debates in the software development world about ‘test automation’ and how we can ‘automate all of the testing’.  In the context of this article I am ignoring the difference between testing and checking; more details of that discussion can be found here - Testing and Checking Refined. However, some of my ideas and concepts will touch on the difference between checking and testing. 

Many have argued that we can automate what we already know: if we have defined requirements up front, then it should be possible to automate them.  My counter to this is that many sports have well-defined, upfront requirements (laws) describing how the game should be regulated.  For example, the laws of football (soccer to those outside of Europe) can be found online here: FIFA Laws of the Game.  If these requirements are defined upfront, why do we not have automated referees? I asked this question on Twitter and some of the responses pointed to physical limitations, such as battery power, being unable to run and so forth.  My line of thought is about how these requirements can be, and are, interpreted. 

Looking deeper into the football laws of the game, there are many ambiguous statements which, given the state of AI at the time of publication of this article, I feel are impossible to automate. For example, page 39 states the following as a reason for a player to be cautioned:
“unsporting behavior”
What does this mean?  Page 125 attempts to define it with a list of what constitutes unsporting behavior.  One of these in particular I found interesting; it is based on the human tendency to try to con or cheat:
“attempts to deceive the referee by feigning injury or pretending to have been fouled (simulation)”
This, I feel, is a common-sense decision made by the referee. How could an automated system know if the injury is fake or not?  Then again, how would the referee know?  It is a common-sense decision, made depending on a multitude of factors and contexts.

How about this one? 
“acts in a manner which shows a lack of respect for the game”
What would count as a lack of respect?  Consider a player who, in the last second of the game, lets in a goal that allows the opposition to win the title. The player shows human emotion and frustration; there is a fine line between emotion and respect, or the lack of it.

My issue with this automation debate is that, at this time, it is not possible to automate the common sense and the multiple contexts involved in the decision-making process a referee has to go through.

For example, if a team is winning 20 – 0, a machine would continue to officiate the game according to the strict letter of the law, whereas a human referee would allow some flexibility in interpreting the rules and let some empathy be applied to the game.  Is it yet possible to automate empathy? 
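To make the point concrete, here is a minimal sketch (not any real officiating system; the event names and rules are invented purely for illustration) of what a strict letter-of-the-law rule engine looks like. It can only apply the rules it has been given; the score, the mood of the match and any sense of empathy simply do not feature in its decision.

    # Toy rule engine applying the written law literally - illustrative only.
    def should_caution(event, score):
        """Return True if the law, read literally, demands a caution."""
        # 'score' is deliberately ignored: the letter of the law does not care
        # whether the game is 1-0 or 20-0.
        if event == "time_wasting":
            return True
        if event == "feigning_injury":
            # The law says caution for simulation, but nothing here can tell a
            # genuine injury from a fake one - that judgement is missing.
            return True
        return False

    print(should_caution("time_wasting", (20, 0)))   # True, every time

A human referee managing a 20 – 0 game might quietly let the same infringement go; the rule engine has no way even to represent that choice.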

James Christie made a valid point on Twitter that the reason the majority of sports call them laws and not rules is that:
“rules are detailed and specific whilst laws can be based on vague principles, which require interpretation and judgment.”
This makes sense, since most countries have courts where lawyers debate how the laws of the land can or should be interpreted, and then a jury, a judge or a set of judges makes a decision based upon the arguments presented. This is another case where the requirements are listed and known, but given current AI limitations they would be impossible to fully automate. Even though we know that human beings are flawed in the judgements they make, would an automated judgement machine, if it were possible to produce at all, be any less flawed?

Returning to the laws of sport and how ambiguous they are, we can look at the laws of Rugby Union.

Looking at the beginning of the laws on page 21 there is guidance on how the laws should be applied:
“The Laws must be applied in such a way as to ensure that the Game is played according to the principles of play. The referee and touch judges can achieve this through fairness, consistency, sensitivity and, at the highest levels, management.”
How would you automate sensitivity in this context?

According to the Oxford English Dictionary, sensitivity in this context is defined as:
“A person's feelings which might be easily offended or hurt”
Add “fairness” into that equation and we are now journeying down the automation rabbit hole.
Looking at the laws regarding fair play, the guidance the document provides for foul play (Law 10), section (m), gives the following:
“Acts contrary to good sportsmanship. A player must not do anything that is against the spirit of good sportsmanship in the playing enclosure”
What constitutes “the spirit of good sportsmanship”? How do you distinguish between intentional and unintentional behavior?   Again, I am uncertain whether this kind of decision could be automated.

If we look at the laws of Rugby League we can see similar issues in how difficult the laws can be to interpret. Rugby League was one of the early adopters of video technology to assist the referee in the game.  This is what Michael and James, in their article, would define as tool-assisted testing: a video referee can review certain decisions using video technology. 

Look at the definition of a forward pass:
“is a throw towards the opponents’ dead ball line”
How do you define this in the context of a fast-moving game?  Section 10, which offers some guidance, distinguishes between deliberate and accidental forward passes.  How do you make that distinction between the two actions? And would an automated system be able to deal with factors such as the momentum of the player and the wind moving the ball?  Yes, it could process information quicker than a human could, but would it be right?
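As a toy illustration of the momentum problem (the numbers below are invented), consider a ball thrown backwards out of the hands of a player who is sprinting forwards. Relative to the player the pass goes backwards, yet relative to the ground it still travels towards the opponents' dead ball line, so even a perfectly accurate measuring system has to be told which frame of reference the law intends.

    # Invented numbers: a player sprinting forward throws the ball backwards.
    player_speed = 8.0            # m/s towards the opposition (assumed value)
    pass_speed_from_hands = -3.0  # m/s, i.e. thrown backwards relative to the passer

    relative_to_player = pass_speed_from_hands                 # -3.0 m/s, backwards
    relative_to_ground = player_speed + pass_speed_from_hands  # +5.0 m/s, forwards

    print(relative_to_player < 0)   # True  - backwards out of the hands
    print(relative_to_ground > 0)   # True  - still drifts towards the dead ball line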

This is not to say that referees are infallible; there are many instances in sport of them making mistakes. However, people are aware of this and can accept it.  Given our bias that machines are not fallible, would people be as willing to accept a machine making similar mistakes?

Many sports are implementing some level of automated systems which are used to aid the referees.  


It is interesting to note that each of these automated systems has had some controversy regarding its accuracy and success, especially the cricket system.


To conclude: when people discuss test automation and attempt to automate as much as possible, there is a need to step back and think critically. Automation in software development has its place and is a useful tool; however, it should not be thought of as an alternative to testing as applied by a human being.  Even when you think you have the requirements nailed down, they are words, and as such are open to a multitude of interpretations.  Using a mixture of automation, tool-assisted testing and human testing, in a ratio that adds value to the quality of the product being delivered, is a more thoughtful approach than the mantra that we can “automate all the testing effort.” Going forward we need to be thoughtful about what machines can do and what they cannot do.  This may change as technology progresses, but as of the publication of this article there are big limitations in automation.

Monday 26 October 2015

Testing Skills #7 - Reflection

What type of learning do you need to engage in today?

Do you need to learn facts?

Or

Do you need to analyse and critically think about what you need to learn or have learned?
Neither of these ways of learning is wrong and each has its place in your learning journey.  Knowing which to use, and when, is a great tool for a tester to have.

Learning facts is defined as a 'surface approach' style of learning, while writing and thinking critically about what you have learned can be defined as a 'deep approach'.

Reflection is important in software testing: you need to be able to communicate what you have found and uncovered when testing.  If you have no opportunities in your organization to communicate with others, then you may find you have a lot of tacit knowledge but little in the way of explicit knowledge.  It is vital to set aside time for members of the team to discuss what new information they have uncovered or learned when testing.  This allows them to use reflective reasoning to turn tacit knowledge into explicit knowledge, and gives the person providing the information a sense of purpose and achievement in what they have learned.

To improve your own self-reflection, produce a reflective action plan. By doing this you are committing to yourself what you intend to do, and engaging your critical thinking skills in doing so. Make sure your plan has goals and deadlines that you can commit to.  Making these deadlines and goals public encourages you to keep to them.  Using a personal Kanban board can help you achieve this.

"First, the deep and surface approaches are not personality traits or fixed learning styles.  Students adopt an approach which is related to their perceptions of the task."

When you need to think deeply about what you have been studying, writing down your own thoughts and understanding is a good way to see what you have remembered.  If you only need to learn information or facts, then other models would be more suited.

Writing down your thoughts on what you have been attempting to learn enables you to better remember and apply what has been studied.  Reflection is about doing your own internal self-assessment of what you have been learning.  This assessment enables you to take what you know internally, your tacit knowledge, and attempt to make it clear, detailed, explicit knowledge.  To do this you need opportunities to talk to others, verbally or by writing down what you have studied, to invoke critical thinking.

"In general, writing appears to be suitable for tasks where the aim is fostering understanding, changing students' conceptions and developing their thinking skills, but less suitable if the goal is the simple accumulation of factual information"

One way in which you can improve your learning is to use reflection:

"Reflection is an active process whereby the professional can gain an understanding of how historical, social, cultural and personal experiences have contributed to professional knowledge and practice "(Wilkinson, 1996).  
You learn from your experiences, and to make this happen you need to engage in reflection.  If you think about what you are doing or what you have experienced, you are engaging in self-reflection. This approach helps you to turn learning into knowledge and is vital in improving your own knowledge and skills.  Reflection is really about looking back at a situation, thinking about it critically, learning from the experience, and then using that new knowledge to help in future similar situations.  This in many ways is similar to the approach you will be taking when engaging in testing activities.

"The process of reflective writing leads to more than just a gain in your knowledge; it should also challenge the concepts and theories by which you make sense of knowledge. When you reflect on a situation, you do not simply see more, you see differently. This different way of viewing a situation is reflected in statements about a commitment to action. Action is the final stage of reflection. "

Monday 19 October 2015

MEWT4 Post #2 - A coaching model for model recognition - Ash Winter

Abstract:

As part of my role, I often coach testers through the early part of their career. In this context I have noted a pattern in the application and interpretation of models. They are generated internally through various stimuli (learning, influence of others, organizational culture) and then applied subconsciously for the most part, until there is sufficient external scrutiny to recognize them. To this end, I have created a model of questions to help testers to elevate their internal models to a conscious level and begin to articulate them.

To this end I hope to articulate at MEWT:

  • Presentation of the model of questions to determine internal models in use, without introducing models explicitly.
  • Use of Blooms Taxonomy to visualize a coachees modelling paradigm and the steps towards modelling consciously.
  • Practical examples of using this model to assist early career consulting testers to cope with new client information saturation.
Slides for the talk by Ash can be downloaded here - https://mewtblog.files.wordpress.com/2015/10/coaching-model-for-unrecognised-internal-models.pptx

____________________

The first speaker at MEWT was Ash Winter, who talked about his experience of coaching and how coaches have their own internal models, which could still be wrong.  Ash talked about the issue he and other coaches have experienced with using models and the risk that they can limit your thinking.  He had noticed that some coaches talk about models without really recognizing that they are using a particular model.  This appears to be especially true in the testing domain.

Ash presented a different coaching model, based on Bloom's taxonomy, to provide a framework for asking questions of those you are coaching rather than providing answers.  Ash stated that we should, as coaches, “Build your model on pillars of questions, not answers.  You are coaching.”

The levels of Bloom's taxonomy can be seen here:



An in-depth look at Bloom's taxonomy can be found here.


Ash displayed a different variant of this during his talk:



Ash stated that he felt Bloom's taxonomy was good for learning and useful for coaching as well.  Since Bloom's works on the basis that you work towards goals, this also applies to those who coach and utilize coaching models.   

Ash also stated that his model is for those who are experienced as coaches and who are involved in coaching those who are early in their career as a tester.  As with any other model, Ash did point out that he felt this was a new coaching model which was still evolving and emergent, and he wanted input from the wider community.

During the discussion after Ash had spoken, I highlighted that the Bloom's taxonomy approach does have some flaws, especially in the digitally driven learning environment in which we are now situated. 
The hierarchical approach of Bloom's does not encourage deep and meaningful learning aided by digital media.

“The problem with taxonomies is their attempt to pin down the complexity of cognition in a list of simple categories. In practice, learning doesn’t fall into these neat divisions. It’s a much more complex and messier set of cognitive processes.” http://donaldclarkplanb.blogspot.com/2006/09/bloom-goes-boom.html
Further reading on issues with Bloom's taxonomy:

There are alternative learning models which appear to overcome these flaws in Bloom's, and perhaps mixing them together would provide a more robust model for Ash to work with.

For example:
“Heutagogy is the study of self-determined learning … It is also an attempt to challenge some ideas about teaching and learning that still prevail in teacher centred learning and the need for, as Bill Ford (1997) eloquently puts it ‘knowledge sharing’ rather than ‘knowledge hoarding’. In this respect heutagogy looks to the future in which knowing how to learn will be a fundamental skill given the pace of innovation and the changing structure of communities and workplaces.” https://heutagogycop.wordpress.com/history-of-heutagogy/
Or
“Connectivism is driven by the understanding that decisions are based on rapidly altering foundations. New information is continually being acquired. The ability to draw distinctions between important and unimportant information is vital. The ability to recognize when new information alters the landscape based on decisions made yesterday is also critical.” http://www.itdl.org/journal/jan_05/article01.htm
At the end of the talk the group felt they needed to go away and think more about the ideas Ash had discussed.

To finish I will leave you with a quote from Ash during the talk:

“A lot of people do not know what models are; sometimes they emerge during applied practice.”








Testing Skills #6 - Speaking the language of business

The following short article is based on a talk given by Keith Klain at both CAST and TestBash.




As testers we find it difficult to explain our value when we are given opportunities to talk to senior executives in companies.  We normally end up talking about the technicalities of testing or, even worse, talking down to them as if they do not understand how important testing really is.  The most important rule when talking to business people about testing is…

“Do not talk to them about testing”

You should instead try to make them feel comfortable in the knowledge that you, as the person assigned the role of ensuring testing is done, have it covered.

Instead, focus on how the testing approach you use is aligned to the business strategy of the company and how your role helps the business be successful.  Talk to them in business terms and explain how you align testing to the business.  What value does what you do add, and how does it prevent the business from losing money or customers?

Business people are focused on risk, and on avoiding risk that impacts the bottom line. When talking to executives, instead of saying we should not be automating everything, talk about the risks associated with attempting to automate everything: the business cost of maintenance, or the risk of not uncovering information that could cause a loss in value to the business.  Talking about checking and testing can be useful to help people understand the value of what we do.  It is important to present a balanced view, explain the benefit of both, and show how using both can mitigate risk.


Look at providing examples to the executives of the bad things we will end up doing to the customer if we do not test properly.   Ask them how you can help with their decisions, to help your clients and protect business value. 

If your company is listed on the stock market, do you read or watch your company's financial statements?  These give useful insights into what the company values.  Learning the financial language of your company can be useful when talking to executives.  With this information you can tailor your discussion around the value your testing provides to the whole organization. Many testers focus on the value that they, or the testing, provide; instead, focus on defining the value of the whole team delivering the product. Explain how the testing is aligned to delivering as a team rather than focusing on testing alone. Look at the company annual report; it highlights the company's risks and issues.  Understand what those are and align your testing to them.  

Most importantly, be prepared.  If you know you are going to talk to these people, understand what is important to them and what motivates and drives them. Having this information can help you build a relationship around their aspirations and make them feel you understand them and their needs.

As Keith states, “Focus on the big stuff and work back from there.” That is what the executives are really interested in.





Friday 16 October 2015

MEWT4 Post #1 - Sigh, It’s That Pyramid Again – Richard Bradshaw

This is the first in a series of posts I plan to write after attending the fourth MEWT peer conference in Nottingham on Saturday 10th October 2015.

Before I start I would like to say thank you to all the organizers for inviting me along, and a BIG MASSIVE thank you to the AST for sponsoring the event.



Abstract: 

Earlier on in my career, I used to follow this pyramid, encouraging tests lower and lower down it. I was all over this model. When my understanding of automation began to improve, I started to struggle with the model more and more.

I want to explore why and discuss with the group, what could a new model look like?

___________________________

During the session Richard explained his thoughts about the test automation pyramid created by Mike Cohn in his book Succeeding with Agile and how the model has been misused and abused.



Richard talked about how the model has been adapted and changed over the years, from adding more layers...



...to being turned upside down and turned into an ice-cream cone.


Duncan Nisbet pointed out that this really is now an anti-pattern - http://c2.com/cgi/wiki?AntiPattern.  The original scope of Mike's diagram was to demonstrate the need for fast feedback from your automation, and as such it focused the automation effort at the bottom of the pyramid, where the feedback is fastest. The problem Richard has been experiencing is that this model does not show the testing effort or the tools needed to get this fast feedback.  It also indicates that as you move up the pyramid less automation effort is needed or should be done.  The main issue for Richard was how the pyramid has been hijacked and used to suggest that the priority of effort should be on automation, rather than focusing on the priority of both in given contexts.  

Richard presented an alternative model in which both testing and automation, along with the tools required, could be shown on the ice-cream cone diagram.



With this diagram the sprinkles on top were the tools and the flakes were the skills.  He then, in real time, adjusted the model to say it would be better as a cross-sectional ice-cream cone, with testing throughout the cone and the tools across all areas of the original pyramid.  Many attendees liked this representation of the model, but some thought that it still encouraged the idea that you do less of certain testing activities as you move down the ice-cream cone.

At this stage I presented a model I had been using internally to show the testing and checking effort. 



Again, people thought this indicated that we need to do less as we move up the pyramid, and it went back to the original point made by Richard that the pyramid should die.

After MEWT I thought about this problem and tweeted an alternative representation of the diagram. After a few comments and some feedback the diagram ended up as follows:



With this model the pyramid is removed: each layer has the same value and importance in a testing context.  It shows that the further up the layers you go, the more the focus should switch from checking to testing, while the lower down you go, the more the focus should be on automating the known knowns. All of this is supported by tools and skills.  As a model it is not perfect and it can be wrong for given contexts; however, for me it provides a useful starting point for conversations with those that matter.  It especially highlights that we cannot automate everything, nor should we try to do so.

In summary, the talk given by Richard was one of the many highlights of the day at MEWT and inspired me to look further into the test automation pyramid model and its failings.  I agree with Richard that the original model should die, especially given the way it is often misused.  Richard provided some useful alternatives which could work, and hopefully as a group we improved upon the original model.   Richard did clarify that his ice-cream cone model with sprinkles is not his final conclusion or his final model, and he will be writing something more on this in the near future.  His blog can be found here - http://www.thefriendlytester.co.uk/.

Now it is over to you: please provide your feedback and comments on this alternative model.

Monday 12 October 2015

Testing Skills #5 - Remote Experiential Learning


Many people work with globally distributed teams, which brings logistical issues, one being how to implement useful and practical training approaches.  One common approach is C.B.T. (Computer Based Training), where participants log in and work through pre-prepared exercises and videos, sometimes with a test at the end.  Another approach is to arrange a video session with an online tutor, who goes through the material while the participants listen and ask questions.  These are OK as learning tools, but it is difficult for the participants to apply the knowledge learnt to their daily role. 

There is an alternative distance learning approach that I experienced whilst attending an online workshop run by The Growing Agile team (Samantha Laing and Karen Greaves).  Since that course I have created my own remote workshop using this approach, with some success.  What follows is an introduction to this approach.  Hopefully you can take this and adapt it for your own teams.

The basic principles of this remote training approach are based upon the 4Cs, as described in the book “Training from the Back of the Room” by Sharon Bowman. Each module of your course should include all elements of the 4Cs.

For each module of the course I create a workbook which goes through each aspect of the 4Cs.

The first ‘C’ is Connect

Before you start teaching the students, ask them what they already know about the topic.  Create activities they can do offline to find out how the topic is relevant to their current role or what they currently know about it.

The next ‘C’ is Concepts

This is the traditional learning part, where you can introduce and explain what the topic is about.  You can do this as either a series of written articles or pre-recorded videos.

The third ‘C’ is Concrete practice

Students apply the concepts in practice.  If you are running this remotely you can set up activities and exercises related to the concepts which the students should, ideally, apply to their own working domain.

The final ‘C’ is Conclusions

This is best done as a small group, perhaps as an online video call.  All the students get together and discuss what they have learnt.  This is a great way to reinforce the learning, since each person should bring different examples of applying the learning to the discussion, providing a more context-rich learning experience.


When you are looking to create any remote learning experience, it is worthwhile making sure that each of your training sessions covers all aspects of the 4Cs. An advantage of this learning approach is that it requires only a couple of hours of learning from each participant.  They can do this at their own pace and then discuss their learning, and how it applies to them, during a weekly hour-long video conference call with the others taking part in the course.  It is crucial to set your expectations of the participants and get them to commit to spending some time doing the exercises before the video call.   

As an additional option, when I ran my remote workshops I set up a closed wiki site so that the participants could have discussions and share information about what they had learnt.   Also, with permission from the participants, I recorded the video sessions and uploaded them to the wiki so they could go back and watch them later.

Monday 5 October 2015

Testing skills #4 - Note taking

 Why is note taking an important testing skill?

There are a variety of ways to capture the evidence of our testing, but if our notes are not of suitable detail then the value of our testing can be diminished.  Taking notes enables us to improve our knowledge and reinforces our understanding of the product being tested. This is part of utilizing critical thinking skills, which was discussed in the chapter on 'critical thinking'.

Robert Lambert discusses the need to have good note-taking skills when performing exploratory testing:
"During a session a good exploratory tester will often narrate the process; all the time making notes, observations and documenting the journey through the product. This level of note taking allows the tester to recall cause and effect, questions to ask and clues to follow up in further sessions." 
Explaining Exploratory testing relies on good notes - Robert Lambert - 2013
Michael Bolton wrote the following about note taking when testing:
"One of the principal concerns of test managers and project managers with respect to exploratory testing is that it is fundamentally unaccountable or unmanageable. Yet police, doctors, pilots, lawyers and all kinds of skilled professions have learned to deal with problem of reporting unpredictable information in various forms by developing note-taking skills." 
An Exploratory Tester’s Notebook  - Michael Bolton - 2007
There are a variety of note-taking methods which you, as a tester, can utilize.  This page has examples of a few of them: Note Taking Systems - Student Academic Services - Cal Poly

One method that I have found extremely useful, especially when capturing information from conferences or recording my findings when testing, is the 'Cornell Method'.

The Cornell Method was developed by Dr Walter Pauk of Cornell University and is widely used by university students.  It is a very useful method for helping you work out whether you can remember what you have written.

First of all you need to create a layout for each page in your notebook, with a narrow review (cues) column on the left, a larger note-taking area on the right, and a summary section across the bottom. Alternatively, use this Cornell Method PDF generator.

The method has 5 steps.

1. Capture what is being said, or what you observe, in the note-taking area.
2. As soon as possible, review your notes and capture key details in the review (cues) column, adding any questions you may have thought of.
3. Cover up your notes, showing only the review column, and try to summarize your thoughts based on the cues.  Provide answers to any of the questions you wrote in the review column.  Use the summary section at the bottom to summarize your understanding and learning.  If you are struggling with your summary, it could indicate that your notes are not sufficient.
4. Ask yourself questions on the material, both the cues and the notes. Think about how you apply this information to your work.  How does it fit with what you already know?
5. Spend some time reviewing your notes and summary every so often to reinforce your understanding.
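As a small illustration (the note content below is entirely invented), the five steps map quite naturally onto a simple structure for a testing-session page, which also makes the later review steps easy to repeat:

    # Invented example of a Cornell-style testing session page as a data structure.
    session_note = {
        "notes": [                    # step 1: the note-taking area
            "Login form rejects 64-character passwords",
            "Error page exposes a stack trace",
        ],
        "cues": [                     # step 2: key details and questions
            "What is the documented password length limit?",
            "Are stack traces shown on other error pages?",
        ],
        "summary": (                  # step 3: your own summary at the bottom
            "Password handling has undocumented limits and error handling "
            "leaks internal detail - follow up next session."
        ),
    }

    # Steps 4 and 5: revisit the cues and check you can still answer them.
    for cue in session_note["cues"]:
        print("Can I still answer:", cue)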

It is crucial that, as a tester, you practice your note-taking skills.  Poor note taking can lead to missed problems and hinder knowledge sharing with the team.  Your notes are what help to turn your tacit knowledge into explicit knowledge.