Reviewing the video lectures of the HCI course published by Stanford professors on Coursera, I've learnt about the Wizard of Oz technique. The idea is that a feature is presented to users as if it works, while in reality a human "wizard" behind the scenes is doing the work.
More information can be found here
In practice, we have been using this technique very often, especially during demos. The idea is to implement a limited set of scenarios and demonstrate only those, without showing anything non-functional. The key difference is that the professor recommends telling the truth and disclosing the gaps at the end of the experiment, which is something we don't do.
The lecturer also compares low-fidelity and high-fidelity prototypes. The point is to use low-fidelity prototypes at the start of a project to invite criticism and comments, and to work toward a highly usable, attractive product over time. Users are more reluctant to criticize or voice concerns about flaws when something looks polished, even if it's buggy or half-baked.
As the professor of the "Think Again" course on Coursera put it, we rely on assurances simply because we have limited time. So basically, sometimes it's necessary to cut the crap and put a foot down rather than spend a ridiculous amount of time investigating every possible scenario.
A very good point, and another gap in my BA repertoire.
This is actually worth a separate blog post and many separate discussions, but I will at least start 🙂
As Henrik Kniberg describes in Scrum and XP from the Trenches, their teams followed two approaches:
- Separated teams (country based)
- Separated team members (product based)
Right now we have a slight mixture of both, with the separate-teams approach prevailing. For now it's unclear how to improve this, so I will come back to it eventually.
According to H. Kniberg's Scrum and XP from the Trenches, their teams held a weekly meeting known as the Scrum of Scrums, where Scrum Masters from all of the teams (and products) gathered to discuss each team's progress.
Here is the link to Mike Cohn’s article on the Scrum of Scrums
In our project we have something similar, known as the product coordination meeting, where Product Owners (we don't have actual Scrum Masters) and their proxies (people like me) get together on GoToMeeting to discuss where we are with the products. The purpose is to review the user stories planned for the release (we don't have a formal timeboxed sprint) and determine where we stand in terms of completing features, not just user stories or development tasks.
Generally, these meetings are quite useful, but one thing currently missing is any check of release health, i.e. progress against the schedule versus actual results.
This is part of a more general problem: the absence of a centralized requirements and project management approach (not just the software, but the actual process of handling requirements, team, and project management).
Nevertheless, in this sense our team is in a good spot, as we have product calls on a regular basis (twice a week, actually, but for different audiences) covering two important questions:
- what progress was made in the last week
- any concerns that should be raised
We are using Confluence to gather and store internal and client-facing requirements documentation. We set up release-dedicated pages with the list of features for the release (committed, under discussion, completed) and review the resulting table on a weekly basis. A few gaps remain:
- it's difficult to prioritize, as there is no visual indication of each feature's estimate (other than importance: high, med, low)
- since we don't prioritize thoroughly, the implementation schedule for each feature is unclear
- we have a dashboard showing each team member's workload (based on tasks assigned in JIRA), but it isn't tied to the release in any way
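To make the first gap concrete, here is a minimal sketch (not our actual tooling, and with made-up feature names and numbers) of the kind of release table we'd want: each feature carries a point estimate alongside its importance, so the table itself can answer a basic release-health question.

```python
# Hypothetical release table: features with status, importance, and the
# story-point estimate that our current Confluence pages lack.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    status: str      # "committed" | "under discussion" | "completed"
    importance: str  # "high" | "med" | "low"
    points: int      # story-point estimate (the missing piece)

features = [
    Feature("SCORM import", "committed", "high", 13),
    Feature("Report export", "committed", "med", 5),
    Feature("New login page", "completed", "low", 3),
]

# Simple release-health check: how many points of work are still open?
remaining = sum(f.points for f in features if f.status != "completed")
print(f"Open work in release: {remaining} points")  # 13 + 5 = 18 here
```

With estimates in the table, a weekly review could compare this remaining total against the time left before the release instead of eyeballing a feature list.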
Product Team Structure and Product Calls Audience
For a person with a hammer, everything is a nail. After trying planning poker for the first time, we have been looking for ways to use it more extensively, and almost immediately we ran into a wall. For stories in a very specialized domain, there is a learning curve that can significantly increase the time needed to produce story-point estimates. The question, then, is how meticulous the team should be in its estimates. Hitting within roughly 80% of the actual work volume would not be bad at all, but it depends on a few things:
- how well the team knows the subject of discussion (in our case, SCORM, which only one or two people out of our 10-person team have ever dealt with)
- how strong the technical background of the team members involved in the discussion is
- how busy those team members are (if they are busy, they may prefer to give a feature a higher point value to defer it until the next sprint or so)
Today we had our first planning poker game, involving three developers (one of them an architect) and myself. The architect, who also plays a local project management role, told me he liked the idea and that it helped with the estimates; however, I still want to outline some of the things that caught my eye:
- We tried to estimate tasks rather than user stories, while in theory planning poker is used for estimating user stories that carry a certain business value
- The user story we were referring to is actually an epic and should probably be decomposed into a set of smaller stories
- Instead of using 1 point as a "perfect developer day", we tried a smaller scale: 1 point equivalent to 4 hours of work
- People unfamiliar with the subject were reluctant to provide their input on the estimates
- Differences in estimates are what drive the discussion and the actual brain work. I saw it happen: one person's explanation of his estimate (which was significantly higher than anyone else's) was sound enough to bring everyone to the same conclusion
- It seems that teams who have worked together for some time tend to give estimates in a narrower range; in today's game I didn't see deviations of more than 2 points
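The dynamic in the last two points boils down to a simple rule: reveal estimates simultaneously, and if the spread is too wide, the outliers explain their reasoning and the round is repeated. A minimal illustration (the names, numbers, and the 2-point threshold are made up for the example):

```python
# Hypothetical planning poker round: if the spread between the highest and
# lowest estimate exceeds a threshold, the team discusses and re-votes.
def needs_discussion(estimates, max_spread=2):
    """estimates: dict mapping team member name -> story-point estimate."""
    return max(estimates.values()) - min(estimates.values()) > max_spread

round1 = {"dev1": 3, "dev2": 5, "architect": 13}  # wide spread
round2 = {"dev1": 8, "dev2": 8, "architect": 8}   # converged after discussion

print(needs_discussion(round1))  # True: the outlier explains, team re-votes
print(needs_discussion(round2))  # False: consensus reached, move on
```

The value isn't in the arithmetic, of course; it's that the wide spread in round 1 forces exactly the kind of explanation I described above.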
In the books on Scrum and Agile that I've read, the authors describe planning poker games where the team or the Scrum Master simply reads out the story and everyone starts estimating. That doesn't seem to work well for highly specialized projects like ours, where different individuals deal with different parts of the business domain. As a result, we have one or two team members capable of making reasonable estimates, and of actually delivering what's needed, in specific areas like SCORM. Others, who don't possess the same knowledge, are less involved in explaining their position and wait to hear what the more experienced members say.
Overall, it was a pretty cool experience, and I look forward to introducing it at the sprint level for one or two of our subteams.