Thursday, December 7, 2017

Becoming a Feedback Fairy

Late in the evening at the speakers' dinner of CraftConf 2017, I met a new person. He was a speaker, just like me, except that when he asked what I would speak on, he used the words: "Explain it to me like I am not in this field, and I don't understand all the lingo".

I remember not having the words. But this little encounter with a man I can't even name made it into my talk the next day, when for the first time I introduced myself in a new way:

"I'm Maaret and I'm a feedback fairy. It means that I use my magical powers (testing skills) to bring around the gift of deep and thoughtful feedback on the applications we build and the ways we build them. I do this on time for us to react, and with a smile on my face."

That little encounter crystallized something I had already been approaching from other directions. Two earlier events had also had their impact.

At a Devoxx conference some time ago, I did a talk about Learning Programming. Someone in the audience gave me feedback, explaining that they liked my talk. The positive feedback, as it was phrased, made an impact: they said they would ask me to be their godmother, if that place weren't already reserved for J.K. Rowling. As a dedicated Harry Potter fan, being next to J.K. Rowling on anything is probably the nicest thing anyone can say.

As I received this feedback, I shared it with the Women in Testing group, and a new friend in the group picked it up. As I was doing my first ever international keynote at an open conference, she brought me a gift you can nowadays see in my Twitter background: a fairy godmother doll, to remind me of my true powers.

For the Ministry of Testing Masterclass this week, I again introduced myself as a feedback fairy.
You can be a feedback fairy too, or whatever helps you communicate what you do. There's an upside to being a magical creature: I don't have to live by the rules set by the mortals. 

Friday, December 1, 2017

Sustainability and Causes of Conferences

Tonight is one of those nights when I think I've created a monster. I created the #PayToSpeak discussion, or, better framed, I took a discussion that was already out there outside our testing bubble, brought it inside, and gave it a hashtag.

The reason I think it is a monster is that most people pitching in to the conversation have very limited experience of the problem it is part of.

My bias prior to experience

Before I started organizing European Testing Conference, I was a conference speaker and a local (free) conference organizer. I believed strongly that the speakers make the conference.

I discounted two things I had no experience of at the time:

  1. Value of organizer work (in particular marketing) in bringing in the people
  2. Conference longevity / sustainability
Both of these mean that conference organizers need to make revenue to pay expenses even while the conference itself is not running. 

Choices in different conferences

My favorite pick on #PayToSpeak Conferences is EuroSTAR, so let's take a more detailed look at them.
  • A big commercial organization, paying salary of a full team throughout the year
  • Building a community for marketing purposes (and to benefit us all while at it) is core activity invested in
  • Pays honorarium + travel to keynote speakers
  • Pays regular speakers nothing, but gives them an entry ticket to the conference
  • Is able to accept speakers without considering where they are from, as all regular speakers cost the same
  • Attending costs participants significant money, and lots of sponsors seek contacts with the participants
I suspect, but don't really know, that they still make a profit on the conference after using some of the income to run the organization for a full year. I do know their choice is not to invest in the regular speaker, and I believe that lowers the quality of the talks they are able to provide. 

Another example to pick on would be Tampere Goes Agile - an Agile conference in Finland I used to organize. 
  • A virtual project organization within a non-profit, set up for running each year
  • No activity outside the conference except planning & preparation of the conference
  • Pays travel to all speakers, can't pay special honorarium to keynote speakers
  • Runs on sponsors money and stops when no one sponsors
  • Is not able to get big established speaker names, as it doesn't pay speaker fees
  • Requires almost zero marketing effort, straightforward to organize
  • Free to attend to all participants
Bottom line


PayToSpeak is not about conferences trying to rip us speakers off when they ask us to cover our own expenses. Conferences make different choices about ticket price (accessibility to participants, traded against the amount of sponsor activity) and about how they allocate investment and risk.

Deciding to pay the speakers is a huge financial risk if paying attendees don't show up.
Paying speakers' travel conditionally (only if enough people show up) does not work out.
Big-name keynote speakers typically expect 5-15k of guaranteed compensation on top of their travel expenses being covered.

Conferences decide where they put their money: participants (low ticket prices), speakers (higher ticket prices with arguably better-quality content), keynote speakers (who wouldn't show up without the money), or organizers (real work that deserves to be paid, or it will not continue for long).
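To make the trade-off concrete, here is a back-of-envelope sketch in Python. Every figure in it is an illustrative assumption of mine, not any real conference's budget:

```python
def break_even_attendees(fixed_costs, speaker_costs, sponsor_income, ticket_price):
    """Number of paying attendees needed to cover the budget.

    All inputs are illustrative assumptions, not real conference figures.
    """
    gap = fixed_costs + speaker_costs - sponsor_income
    if gap <= 0:
        return 0  # sponsors already cover everything
    return -(-gap // ticket_price)  # ceiling division: no fractional tickets

# Assumed numbers: 50k fixed costs, 20k sponsor income, 300 per ticket.
# Covering travel for 30 speakers at ~1000 each adds 30k of up-front commitment.
pay_to_speak = break_even_attendees(50_000, 0, 20_000, 300)       # 100 attendees
pays_travel = break_even_attendees(50_000, 30_000, 20_000, 300)   # 200 attendees
```

The only point of the sketch is that speaker costs are committed before a single ticket is sold - which is exactly the risk described above.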

#PayToSpeak speaks from a speaker's perspective. As speakers, we can choose which conferences we can afford based on the speaker-friendly choices they make.

Options

If we understand that #PayToSpeak mixes up two problems, we may find ideas for improving the current state:

  1. Regularly appearing (but not famous) speakers should not have to Pay to Speak to afford speaking.
  2. New voices with limited financial means should not have to Pay to Speak to afford speaking. 
If a conference does relevant work on problem 2, then as a representative of problem 1 I would consider paying to speak. But I would have to limit myself to something like one per year, because that money comes not out of my company's pocket but my own. 

If a conference collects money for a cause in a transparent way, I would again consider paying to speak, capping the number I can do in a year. 

There are options for removing Pay to Speak:
  • Seek local speakers (build a local community that grows awesome speakers); paying the expenses then isn't a blocker, as the costs are small
  • Commit to paying speaker expenses, but actively invite the companies speakers work for to pay, where possible, to support your cause. See what that does. 
  • Set up one track as an experiment in paying expenses, and compare submissions to that track against the others, e.g. on attendee numbers and scores. 
  • Say you pay travel costs on request, and collect the information on who requests it with the call for proposals
  • Team up with a non-profit on this cause and give them money for scholarships for some speakers. 
You can probably think of some more. 

No conference is inherently evil. Some are simply out of my reach because they are #PayToSpeak. And I'm not a consultant, nor do I work for a company that considers its testers part of its marketing group. If I have to #PayToSpeak, I can't. I will remain local, and online. 

There are people like me, and better than me, who have not started off by paying their dues to build a bit of a name at some #PayToSpeak conference. I want to promote for them the option of not having to #PayToSpeak. 




Why defining a conference talk level means nothing

Some weeks back, against my usual habit of following my immediate energy, I made a blogging commitment.
The commitment almost slipped, only to be rescued today by Fiona Charles saying the exact same thing. So now I just get to follow my energy in saying what I feel needs to be said.

Background
 
As you submit a talk proposal, there are all these fields to fill in. One of them asks for the level of the talk. The level later shows up as color coding on the program, suggesting it is among the three most important pieces of information people use to select sessions. The other important bits are the speaker name (which only matters if the speaker is famous) and the talk title. On how to deal with talk titles, you might want to check out the advice Llewellyn Falco put together.

The beginner/intermediate/advanced talk split comes in many forms. Nordic Testing Days in particular caught my eye with its "like fish in the sea" and "dipping your toes" metaphoric approach, but it is still the same concept.

The problem

To believe concepts like beginner/intermediate/advanced talk levels are useful, you need to believe that people can be compared linearly on a topic like this.
This same belief system is what we often need to address when we talk about imposter syndrome: we think knowledge and skill are linearly comparable, when they actually aren't.

The solution

We need to think of knowledge and skills we teach differently - as a multi-dimensional field.

Every expert has more to learn and every novice has things to teach. When we understand that knowing things and applying things isn't linear, we get to appreciate that every single interaction can teach us something. It could encourage the "juniors" to be less apologetic. It could encourage the "intermediates" to feel they are already sufficient at something, even if not at everything. And it could fix the "experts'" attitude toward juniors, where interaction is approached with preaching and debate rather than dialogue, with the idea that the expert learns from the junior just as much as the other way around.

So, the conference sessions....

I believe the best conference sessions even on advanced topics are intended for basic audiences. This is because expertise isn't shared. We don't have a shared foundation. Two experts are not the same.

It's not about targeting to beginner / advanced, it's about building a talk on a relevant topic so that it speaks to a multi-dimensional audience.

As someone with 23 years of industry experience, even my basic talks have some depth others don't. And my advanced talks are very basic, as I need to bring entire audiences along to complex ideas, like never writing a bug report again in their tester careers.

We need more good talks that are digestible for varied audiences, and less random labeling of talk levels. In other words, all great talks are for beginners. We're all beginners from someone else's perspective. 

Wednesday, November 29, 2017

Not excited about pull requests

There was a new feature that needed to be added. As the company culture goes, Alice volunteered for the job. She read the code around the changes to identify the resident local style in the module, knowing how much of an argument there can be over tabs and spaces. She carefully crafted her changes, created unit tests, built the application and saw her additions working well with the product. She had the habit of testing a little more than the company standard required; she cared a lot about what she was building. She even invited the team’s resident tester to look at the feature together with her. 

All was set except the last step. The Pull Request. It had grown into a bit of a painful thing. Yes, they were always talking about making your change set small to make the review easier, and she felt this was one of the small ones, just as they were targeting. But as soon as her pull request was created, the feedback started.

If she was lucky, there were just comments on someone’s preferred formatting style - across all the codebases they were working on, there was still no commonly agreed style, and automatic formatting and linting were only available in some of the projects. But more often than not, every single pull request started a significant thread of discussion about the other ways the same thing could be done. 

Sometimes she would argue for her solution. But most of the time, she would give in and just change things as suggested. Or as commanded, she felt. It was either a fight or submission. And the one with the power, reviewing the pull requests, somehow always ended up getting their way. 

After rewriting to match the commenters’ solution, Alice would quickly run the unit tests. But that was it. The version that ended up in production got neither the tester’s eyes before merging nor the careful testing in the product context that Alice had put into the first version she created. 

Another time, her change was just a fix for something that was evidently broken. The pull request rumba started again, this time with demands to clean up everything that was wrong around the place that had been changed. She gave up again - accepting the rejection of the pull request; someone else would get to enjoy the next attempt at the fix. The “perfect or nothing” attitude won again. 

When Alice was free to review Bob’s pull request, she too mimicked the company culture. “This is the way it works”, she thought. She felt that if she said less than others, it would reflect badly on her. So she said as much as she could. She shared other ways of implementing things. And just as Alice would change her code in response to comments, so would Bob. Telling the difference between “here’s another way I thought of, for your information” and “I won’t accept this without you changing it” had become impossible. 

This story is fictional, and Alice (and Bob) just happened to be the people in this fictional project. But the dynamic is very real. It happens to developers at all levels of experience when the culture around pull requests starts aiming for perfection instead of good enough. It happens in a culture of delayed feedback through pull requests, with a refusal to pair. There are many ways of implementing the same thing, and arguing about my way versus your way AFTER my way has been implemented sometimes gets overwhelming. 

Here’s what I’d like to see more:
  • suggest changes only when they are needed, not because you can
  • improve the culture around the “acceptance” power dynamic, and remove some of the power reviewers hold as guardians of the codebase
  • when suggesting extensive changes, go to the person and volunteer to pair
  • volunteer to pair before the work has been done
Writing this reminds me how nice it was to mob on some of the code, when the whole pain and motivation drain related to pull requests was absent. 


Tuesday, November 28, 2017

Camps of testing

The testing world is divided - sometimes I feel it is divided to an extent that can never be resolved. Friendly co-existence seems difficult. Yet the effort to work together, to hear the others out, and to have the crucial conversations about something all the divided camps in their own ways hold dear needs to happen.

I think these camps are not clearly formulated, but they feel very real when you end up in disagreement. So I wanted to write about the three that came my way just today.

Testing should be a task not a role -camp

There's a significant group of people who have hurt my feelings at agile conferences by telling me I am no longer welcome. "Testing should be a task, not a role" is usually their slogan. They seem to believe that professional testers and testing specialists should not be hired, and that all programming and testing work should be intertwined in teams of generalists. Of course, generalists are not all cut from the same cloth, so often these generalists can be programming testing specialists - but the programming part is often the key.

I've seen this work great. I can see it might work great even in my team. But then I probably wouldn't be in my team. And my team couldn't say things like "things were bad for us before Maaret joined". The things I bring with me are not just the stuff around automated or exploratory testing of the product; I also tend to hold space for safety and learning. And I do work with code, but more like 20% of my time.

I hypothesize that if this was the dominant perspective, we would have even less women's voices in software development. And I would choose having diverse views through roles over homogenizing any day.

Testers vs. Developers -camp

This is my reframing of the group of people I think of as testing traditionalists. They're building a profession of testers, but often with the idea of emphasizing the importance of the job/role/position by pointing out how developers fail at testing. They joke that test automation engineers are neither developers nor testers (bad at both). They often emphasize traditional tester trainings and certifications. They don't mean to set us up as two camps, but much of the us-and-them communication feels very protective.

I have seen this be commonplace. I have not seen it work great. Creating separate goals for testers (go find bugs!) and developers (get the solution out there as specified!) doesn't help us finish on time and make awesome software.

Developers working with testers in this camp have a tendency to become religious about the "testing should not be a role" camp, if they care about quality. If they just work there and do what they're told, they will probably live with whatever structure their organization puts them in.

Testers and Developers -camp

I would like to believe there is a group of people like me who don't identify with either of the camp archetypes defined above. They believe there can be professional testers (profession/role/job/position, whatever you call it), and that some of them can be awesome with an automation focus while others can be awesome with an exploratory testing focus. They might cross role-task boundaries frequently, in particular through pairing. The keyword is collaboration: bringing the best of us - a group of diverse people with differing interests and differing skill areas - into the work we are doing by collaborating.

This group tends to shift left, until there is no more left or right as things turn continuous.

Where does this lead us? 

As with the schools of testing, this is putting people into boxes that are defined by trying to describe what is good about the way I think. I will continue to evangelize the idea of letting people like me - and people like me 5 years ago, and people like me 20 years ago - enter the field and learn to love it as I have learned to love it. I know I make a positive difference in my projects. I belong here. And I know others like me belong here.

I want to see us thinking of ways to bring people in, not to close them out. I'm open to new ideas on how that could be possible for those who realize they want to be programmers only after they have become excellent through deep, continuous learning of things that are not programming, but that make us excellent exploratory testers. And it might take some decades of personal experience. 

Playing with rotation time in a mob

A common thing to happen in a retrospective after Mob Testing is that someone points out that they feel they need more time as the designated navigator/driver: "I need more time to finish my thought". Today, it happened again.

I facilitated the group on a 3-minute timer. I emphasized the idea that this is not about taking turns executing our individual test ideas, but about the group working on a shared task - the same ideas - bringing the best of each of us into the work we're doing.

In two retrospectives with the 3-minute timer, the feedback was the same: make the time longer. So I did what I always do in these cases: I made the time shorter, and we moved to a 2-minute rotation.

The dynamic changed. People finally accepted that it wasn't about finishing their individual tasks, but about finishing the group's shared task.

A lot of the time, when people feel they need more time, they are really saying they have an individual idea they aren't sharing with the others. A longer rotation allows this. A shorter rotation forces the illusion of its usefulness out of the group. 
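For facilitators who want to try the same experiment, the rotation mechanics are simple enough to sketch in a few lines of Python. This is only an illustrative sketch - the participant names and the function names are mine, not from any real mobbing tool:

```python
import time

def rotate(members):
    """Move the current driver to the back of the queue.

    By convention here, members[0] is the driver and members[1] the navigator.
    """
    return members[1:] + members[:1]

def run_mob_timer(members, minutes=2.0, turns=1):
    """Announce driver/navigator pairs on a fixed rotation.

    A shorter rotation (2 minutes instead of 3) means no one can finish
    a private idea alone, nudging the group toward genuinely shared work.
    """
    for _ in range(turns):
        driver, navigator = members[0], members[1]
        print(f"Driver: {driver} | Navigator: {navigator}")
        time.sleep(minutes * 60)  # wait out this rotation
        members = rotate(members)
    return members

# Example with made-up names and zero-length turns for demonstration:
order = run_mob_timer(["Ada", "Bea", "Cal"], minutes=0, turns=3)
```

After three full turns with three people, the queue is back to its starting order - everyone has driven once.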

Monday, November 27, 2017

A Search for Clear Assignments

I spent a wonderful day Mob Testing with a bright group of people in Portugal today. They left me thinking deeply on two things:

  1. Importance of working in one's native language - the dynamic of seeing them work in English vs. local language was immense. 
  2. Need for clear plans
I wanted to talk a bit about the latter.

I've been an exploratory tester for a long time. I've worked on projects with very detailed specifications, with the end result of a system that worked 100% as specified but fulfilled 0% of the real use cases it was intended for. I've worked on projects that don't bother with specifications at all - everything is based on discussions around whiteboards. And I've worked on projects where all specifications are executable, with the experience that to make that possible, we often like to shrink the problem we're solving down to something we can execute around. 

The first exercise we do on my mob testing course involves an application that has very little documentation. And most people don't think to go searching for the little documentation it has. The three sessions we did gave an interesting comparison.

The first session was freeform exploration, and the group was (as usual) all over the place. They would click a bit here, move somewhere completely different, check a few things there, and make pretty much no notes other than the bugs I strongly guided them to write down. The group reported as their experience that they were "missing a plan".

The second session was constrained exploration, focusing on a particular area of functionality. The group was more focused, but had a hard time naming the functionalities they saw and finishing the testing of the things they started. Again the group reported that they were "missing a plan", even though the box kept them more cohesive in the work they shared. 

The third session was tailored specifically for this group, and I had not done that before. I allowed the group 30 minutes to generate a plan. After discussing what the unit of planning should be (use cases? user interface elements? claims on the web page?), they selected a feature based on a claim on the web page. Before spending any additional hands-on time with the application - on top of the two earlier sessions that had barely scratched the surface of the feature - they drew up their plan. 

The interesting outcome was that
  • They found fewer bugs
  • They were happier working against a recreated (low-quality) checklist
  • They missed more bugs: they saw them while testing but dismissed them as irrelevant. 
  • I recognized some of the symptoms they saw as signs of something really significantly broken in the application, and having seen them test, I now know how I could isolate it. I suspect only a few people in that group would know what needs more focus. 
I take this as a (trained) wish for clear assignments, clear answers, and generally a world where our tasks would be laid out for us. I find myself thinking that this is not the testing I know, but that it is the testing a lot of my automator colleagues know. Getting out of that need for someone else to hand us the "plan", and being active in making and changing our own plans as we learn, is the core of good results. 

We all come from different experiences. Mine suggest that my active role as a software learner is crucial. Having documentation and plans is good, but having the mindset of expanding on those for relevant feedback is better.