Friday, 26 February 2016

Powershell and open-auth / braindump

another /dump.

This will take some shape as I go along. The basic problem I want to solve is using PowerShell to pass authentication through when calling a REST service, but in reality the "passthru" part of it all is a bit of a mystery to me. And it feels like it might well be impossible to do in a really secure fashion.

http://foxdeploy.com/2015/11/02/using-powershell-and-oauth/
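
For my own future reference, the shape of the flow I'm trying to reproduce is roughly this two-step dance, sketched in Python since the same steps apply whatever language ends up doing it. The endpoint URLs, client id and token below are all made up:

```python
# Hedged sketch of the OAuth2 "client credentials" flow: first exchange the
# client credentials for an access token, then pass that token through on
# every REST call. All names and URLs here are invented for illustration.
import urllib.parse
import urllib.request

def build_token_request(token_url, client_id, client_secret):
    # Step 1: POST the client credentials to the token endpoint.
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode("ascii")
    return urllib.request.Request(
        token_url, data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"})

def build_api_request(api_url, access_token):
    # Step 2: the "passthru" part - carry the token on each call.
    return urllib.request.Request(
        api_url, headers={"Authorization": "Bearer " + access_token})

req = build_api_request("https://api.example.com/v1/widgets", "abc123")
print(req.get_header("Authorization"))  # Bearer abc123
```

In PowerShell the second step would be Invoke-RestMethod with a -Headers hashtable carrying the same Bearer header; the mystery is keeping the secret and the token secure along the way.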

Wednesday, 24 February 2016

I'm Hangry, so I decided to give up strangling people for Lent


Everyone is a tester

I stopped being a software developer about 5 years ago now. Well that's not entirely true, I have always been a tester. In fact everyone tests, but not everyone puts "tester" in their job-title; so if this is you, stick with me a sec.

So Lent runs from February 10 to March 24th, and generally people will give up something over that time. It's normally a Christian kind of thing, but in general it's a mindfulness exercise that's good for any religious conviction, or even if that's not your thing. My thing is not killing the people who #$%*@ me off on a daily basis. Most people will be giving up chocolate; they may have tried a Dryathlon or a "Stoptober", but generally cutting any distraction out of your life helps you. Getting hacked off when someone disappoints you is not healthy, and that is true for software testing as a role.

The Lone Tester

Last night I attended a Dojo Masterclass called "The Lone Tester" by Jess Ingrassellino, the lead test engineer at Bitly, who has worked solo for much of her testing career. Jess talks about skills and fresh learnings for anyone who works solo or as a contractor, anyone keen to start a career in test, or anyone looking to shift from manual into automated testing. That sounds like a lot, but Jess has done it all in just 4 years, so it's all very well related; her delivery is centred around being the only tester in your organisation or division and having to make your own way. If you are the Lone Tester, Jess gives some tips on how to see that you are not really alone, since everyone is in fact thinking about quality, just not necessarily as an expert. Which is what you are really there for with your tester hat on.
It's not, in my opinion, a talk with any great revelations in terms of content or process wisdom for any seasoned tester, but she does drop in some pointers for managing your time and workload better, and these might inspire the old guard too. To see the talk recording, sign up in the Dojo at Ministry of Testing.

Back to the Lean coffee.

I snagged the following topics (actual Post-it notes visible in the photo above, with the donkey). As always, my summaries are my words and how I understood them. Everyone in the room hears these same words but takes them in slightly differently. It's called language, and in my case hard-arse.
Performance Testing: Should I test little and often or full-on and infrequently?
  • A few ideas came out here about why this is the wrong question. Long-running stress tests are more like regression tests in many ways and thus carry the same high costs. They also find completely different classes of defect. Knowing this in advance will clue you up on where to go in your strategy
  • You do need both, but understand why and when first
  • Quick testing can never be replaced by deep testing, mainly because it delivers test verdicts quickly and supports your CI process more directly
  • Deep testing delivers more accurate metrics than a quick performance test. But if you apply the same metrics gathering and performance-history analysis to your quick testing runs, you will get more value, more often, and sooner
  • Long-running tests are best run infrequently in your cycle, and only when you change something that could cause the problems associated with stress failures: namely, changing the version of any third-party component, a major algorithm/architecture change, or anything infrastructural that the architects identify as risky
For anyone with time to Google around, maybe look for some tips from a recent #TESTBASH talk by Scott Barber.
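
To make the "quick, CI-friendly" idea concrete, here is a minimal Python sketch (the operation, run count and time budget are invented) of a fast performance check that also keeps a history, so the deeper trend analysis mentioned above has data to work with:

```python
# Quick perf check for CI: time an operation a few times, keep the median in
# a history list (so trends show up across builds), and fail fast if it
# blows the budget. Operation and budget are stand-ins.
import statistics
import time

def quick_perf_check(operation, runs=5, budget_seconds=0.5, history=None):
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        timings.append(time.perf_counter() - start)
    median = statistics.median(timings)
    if history is not None:
        history.append(median)   # mine this later for comparative analysis
    return median <= budget_seconds, median

history = []
ok, median = quick_perf_check(lambda: sum(range(10_000)), history=history)
```

The point is not the timing itself but that even the quick runs leave behind metrics you can analyse, which is the bullet about applying the same metrics gathering to quick testing.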

I've been asked to test a spreadsheet WTF???
This one came from Alan I think it was, like all simple questions it raised good responses:
  • Sometimes you get asked to test something that you don't really want to test. This severely impacts your reaction, and emotions can very quickly prevent you being effective
  • Analyse the business risks and work out the value to the business. Then move
  • Gather some stats on how often the "spreadsheet" causes losses, and present your findings in an easy-to-consume form like a graph, so that people can understand the risks. Exposing the actual size of any risk is your speciality as a tester
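
As a concrete (and entirely invented) example of putting numbers on spreadsheet risk: export the sheet to CSV and recompute its derived columns independently, listing the rows where the stored value disagrees:

```python
# Hedged sketch: re-derive a spreadsheet's computed column from its inputs
# and flag disagreements. The sheet data below is made up; the "gizmo" row
# has a deliberately wrong total.
import csv
import io

SHEET = """item,unit_price,qty,total
widget,2.50,4,10.00
gadget,3.00,3,9.00
gizmo,1.25,2,2.49
"""

def find_bad_totals(csv_text):
    bad = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        expected = round(float(row["unit_price"]) * int(row["qty"]), 2)
        if expected != float(row["total"]):
            bad.append((row["item"], float(row["total"]), expected))
    return bad

print(find_bad_totals(SHEET))  # the gizmo row is off by a penny
```

Counting how often this happens over time is exactly the kind of stat that makes a convincing graph.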

How to cope with Context switching and Time management

This was my topic, but more a question, since it's something I'm rubbish at. I was inspired by the tips that Jess Ingrassellino shared in her "Lone Tester" masterclass.
  • Time your activities to fit in with natural times of the day, like lunch for people-time, mornings for firefighting, and afternoons for actual core work
  • Plan actual test "session-based testing" for set times
  • sprint, and capture stories
  • Use various todo listing tools

How do I assess my value as a tester?

A topic which Jess co-incidentally also touched on, funny how this kept coming up. Honest guv.
  • toot your horn
  • drive process, management expect you to make process changes that impact quality
  • protect revenue. It's not your job to sell the product. Not your job to fix bugs, nor even to find them. All you have to do is ensure customers don't find them... well, at least not the ones that make them select a different vendor's product
As a tester, you must always be asking questions. First, foremost and often. It's called left-shifting.

In closing; one of the topics not covered (there were a good few) was using "Selenium from absolute scratch". I think a few people are interested in getting a n00b's guide.

Tuesday, 16 February 2016

Test automation sticky-note (sic)

A quick note to make sure I do not lose a little idea I got while browsing recent presentations at STARWEST. The specific presentation I have in mind is here:
http://www.stickyminds.com/interview/five-patterns-test-automation-starwest-2015-interview-matt-griscom?page=0%2C0

Matt Griscom links you to his website and a download for the .NET Framework based tool he created. I believe it warrants a try, because although it glosses over some problem-domain specific areas for me, it seems to take account of a lot of the automation framework gotchas I currently face.
His blog is http://metaautomation.blogspot.co.uk/ and the download is hosted here http://metaautomation.net/ .

Basically I face a problem where my current automation system is not flexible and powerful enough, so it requires fragile customization. Stable over the short term, your test code breaks every time the framework revision changes. Which has to happen as you integrate common or shared test code down into a pattern in the toolstack or into a shared library. All this work needs to be designed and planned to reduce the maintenance load. Matt seems to recognize many of the related problems I face there; although he may not solve them, the act of identifying them helps us a lot. Basically he takes the approach "measure everything": make it easy to mine all the data, and suddenly you can do comparative and performance testing as well as predictive triage.
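
To capture the "measure everything" angle before I forget it, a rough Python sketch of what I mean: every test step emits structured data (name, duration, outcome) that can be mined later for performance trends and triage. The names here are mine, not Matt's API:

```python
# Sketch of "measure everything": a recorder that wraps each test step and
# logs structured, mineable data instead of a bare pass/fail.
import json
import time

class StepRecorder:
    def __init__(self):
        self.steps = []

    def run(self, name, func):
        start = time.perf_counter()
        try:
            func()
            outcome = "pass"
        except Exception as exc:
            outcome = "fail: %s" % exc
        self.steps.append({"step": name,
                           "seconds": round(time.perf_counter() - start, 4),
                           "outcome": outcome})

    def report(self):
        # JSON so the run data is trivially mineable across many runs.
        return json.dumps(self.steps, indent=2)

def bad_login():
    raise ValueError("bad password")

rec = StepRecorder()
rec.run("connect", lambda: None)
rec.run("login", bad_login)
```

Once every run produces data like this, comparative testing and predictive triage become a query over the archive rather than a new tool.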

So, just a quick note before I forget all about this angle.

Thursday, 28 January 2016

Why testing fails (the short of it)

I was asked to try to take the speaking place of a colleague, and talk on this for CEWT #2, the Cambridge Exploratory Workshop in Test.
I was initially just hoping to get onto the reserve list, then someone dropped out after I wrote this. So here we go.
Not the kind of thing you want to admit having first-hand experience of when you work for a company that falls into the top 100 of almost every desirable list. I'll share my 2 reasons on the topic "Why testing fails".
It's not possible any more to book a place at CEWT #2, basically because the workshop is limited in size. But if you want to find out more, do get along to a Lean Coffee morning; just Google for the (real) Cambridge meetup "software testing club lean coffee".
To be held on 28th February at DisplayLink, Cambridge. Contact James Thomas @qahiccupps.

Rushed Implementations 

“Look before you leap” comes to mind.
  • Features without the right hygiene lose out in the quality department
  • A feature that does not solve a customer problem becomes harder to test
  • Test not involved early enough

Test Planning 

“Fail to plan, and plan to fail", going around in vicious circles comes to mind.
  • Close-down cycle with no resources planned or budgeted for it
  • Planning impacted by rushed implementation
  • Planning is easier than you think (with good data to support it)
 

Saturday, 23 January 2016

humble bundle green screen challenge

What's the Humble Green Screen Challenge?
Inspired by FMV games, this event allows you to take a crack at making your own full motion video.

How should you make the videos?
We're providing some sample footage that you can use. All we ask is that you somehow involve that. There aren't any prizes to this challenge, so the rules are pretty darn loose.

Rules:
https://support.humblebundle.com/hc/en-us/articles/215590188-Humble-Green-Screen-Challenge

YT demo clip: https://www.youtube.com/watch?v=BnyZHvZya7I&feature=youtu.be
My demo clip: https://youtu.be/_jfySBKSjQM
A bit like those DVD games where the DVD plays a clip and then asks you a question: if you press "left", it plays another clip; if you press "right", it goes a different way. A bit like those make-your-own-story, skip-to-page-X choice novels.

Stuff I learned along the way:
How to do Dolby in VideoStudio : https://www.youtube.com/watch?v=GIR7ljJh7eE
How to get 6 tracks (Dolby 5.1) from a stereo track in Audacity : https://www.youtube.com/watch?v=zu37UaVlLJE
The audio results are not great - mostly due to not having any Dolby or surround equipment.


Chroma and background sources:
  • https://www.youtube.com/watch?v=uL_Q0uRxMOA : Alex Free Stock Video Footage - Full HD - Fast Night Street
  • https://www.youtube.com/watch?v=O9KYVLKCovU : Alex Free Stock Video Footage - Full HD - Animation - Disco Light 
  • https://www.youtube.com/watch?v=Bo7flVvnCgw : Alex Free Stock Video Footage - Full HD - Highway - Italy - Monte Carlo - GOPR0255
  • https://www.youtube.com/watch?v=UOqJrllL2Ec : Ufo Alien Spaceship Fly By - free green screen 
  • https://www.youtube.com/watch?v=_RV8DkZqXI4 : fond vert ovni HD - Greenscreen UFO 1080HD
  • https://www.youtube.com/watch?v=R3aA6TqvNg0 : Free Stock Footage_ Fish Swimming in Ocean Kelp Bed
  • https://www.youtube.com/watch?v=--ze-88FZY4 : Galactic Journey in Space - Royalty Free Footage

Scoring time!

How do you rate my clip against some of the other subs?

Wednesday, 20 January 2016

Cambridge Lean coffee | Towers Watson

After not seeing the crowd of happy testers over the hectic Xmas break, a trip to Sawston was a welcome way to kick start 2016 with a drive-by to the south of Cambridge.
The "Testing" started, when I got picked up on my visitor registration badge right after arriving, because I had dated it 2012. Which was a good thing, because if I had dated it 2015, I would have been investigating the occurrence of an "off-by-one" defect.

The "checking vs Testing" did not stop there, but let's crack on.
We covered the topics, which I paraphrased badly in order to fit them in a hurry onto the well-scoped but limited surface area of a Post-it. We formed 2 groups, so these notes are from Chris George's table only.

Why pay to have a tester?

Or rather, at which point do we need a tester? Some companies test in the traditional way - they have automated unit tests and things are just fine for them right now. Some teams, if small enough, will get by just fine for a while. But without the specialist skills a specialist tester brings, all you have is someone who knows how to check stuff and how to write stuff. A professional tester is an integral part of the team, will be involved with requirements and design review, and will be able to get the correct level of detail into a test plan. You do have a test plan, right?
A professional will have the bandwidth to execute all the testing in the background when the developer is busy trying to fix a large list of bugs 2 days before the release deadline. This might also be called shielding your developers - something your support team might be doing right now already.
Did I mention, testing does not actually stop after the ship party? It helps to have a person on your team who knows that testing is not 100% about running test cases, but is also about helping you judge risk. A dedicated tester allows you to get the right level of detail in your QA, because it enables a different perspective.
A good tester is an important part of a team; like a cog in a clock, it's important to make sure it is unique and just the right size for the job.

How do I automate legacy code testing?

It's really hard to do, and I can offer some tips on how to do this using clever instrumentation in ways that do not require code changes all over the place. But the question elicited these responses.
1. Prioritize your testing: P1 = urgent, P2 = less urgent, and so on. This lends structure to what you are doing as well.
2. Be methodical - look at the test script (you have a script, right?) and analyze it for high-probability blockers. Try to ensure that you run things that can block as early as possible. This gets blockers to the developers early and buys DEV more time to resolve a blocker while you test down a different path.
3. Do session-based testing. This lets you work through a weak test plan, and by logging your sessions you will improve not only future test iteration estimates (and thus be able to time the testing to fit a release closedown), but also see which sessions, and thus which features, most need testing, based on how many bugs you recorded in a session. Excel is a great tool for recording.
4. Traceability - this is going to come out of the above steps.
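
Point 2 above ("run things that can block as early as possible") is really just an ordering problem. A toy Python sketch, with invented test data:

```python
# Order a legacy test run so that potential blockers go first, then by
# priority (P1 before P2). The test list is made up for illustration.
tests = [
    {"name": "print report", "priority": 2, "can_block": False},
    {"name": "install",      "priority": 1, "can_block": True},
    {"name": "open file",    "priority": 1, "can_block": False},
    {"name": "login",        "priority": 1, "can_block": True},
]

# not can_block sorts False (i.e. blockers) first; sort is stable.
ordered = sorted(tests, key=lambda t: (not t["can_block"], t["priority"]))
print([t["name"] for t in ordered])
# ['install', 'login', 'open file', 'print report']
```

Even a crude ordering like this means a blocking failure lands on the developer's desk hours earlier than it would at the end of the run.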

Ultimately a deep understanding of which features depend on which components of the product will guide you to estimate which areas do not need more re-testing, simply due to lower risk. Risk is driven almost entirely by code churn, so components with minor change tend to break less - and when they do, it's mostly because of interface or environment effects.
My tip on how to avoid re-testing legacy code, is to catalog how the environment impacts the features in the product. If environment plays a big part, study the impact and adjust your plans accordingly.

Specific to automation, instrumentation is an avenue worth exploring as a way to automate testing of legacy code, without touching the code-paths. Maybe I'll write something on this in future.
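
As a taster of what I mean by instrumentation: you can wrap a legacy function at runtime and record every call, without touching the legacy code-paths at all. A Python sketch, where legacy_discount() stands in for real untouchable legacy code:

```python
# Instrumentation without code changes: swap a legacy function for a
# wrapped version that records each call's arguments and result.
import functools

def legacy_discount(price, code):      # pretend this is untouchable legacy
    return price * 0.9 if code == "VIP" else price

call_log = []

def instrument(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        call_log.append((func.__name__, args, result))  # record, don't alter
        return result
    return wrapper

legacy_discount = instrument(legacy_discount)  # swap in the wrapped version

legacy_discount(100, "VIP")
legacy_discount(50, "NONE")
```

The recorded log then becomes your oracle: assert on it in automation, and the legacy code never knew it was being watched.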

Which GUI tests should I automate?

Since this is a very common automation question, and the dangers are understood, I'll talk a bit more about ROI.
1. Pare it back: take a good look at what not to automate by identifying the high-priority coverage areas
2. Automate first the things that are hard to test manually. Think about which tests deliver the most value if automated - things like product install/deploy or launch can be easy to automate and unblock your product development (CI system) quickly.
3. Talk about testing earlier - by getting devs to think about testing (manual and automated), you involve dev early and get them to think more like testers. Making the end application easier to test also makes end users' lives easier in many cases.
4. Don't automate unit tests - basically, system test: test the behavior, not the code!
5. Don't burn out your testers! Getting testers to run manual tests all day will drive them a bit nuts; identify the tests that drive them nuts and try to automate those.
I used a score-sheet (Excel to the rescue again) to decide when to automate. It looks like this:

[screenshot: Excel automation score-sheet]
We have 3 dummy cases here
Each test case or "TCD" will have a script (paper or electronic).


Score each question (criterion) from 1-5. Anything that scored less than 5 overall is just never automatable, anything getting over 50 might be, and so on. You get the idea. This screenshot omits the "weighting" applied and a few other "gating" criteria which link in with the PDLC used where I work at the moment (Citrix Ltd.). But you get enough of the picture.
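
The arithmetic behind the sheet is simple enough to sketch in Python. The criteria, weights and cut-offs below are illustrative stand-ins, not the actual Citrix sheet:

```python
# Score-sheet sketch: rate each criterion 1-5, apply a weight, sum, and
# gate on thresholds. Criteria names and weights are invented.
WEIGHTS = {"run_frequency": 4, "manual_cost": 3, "stability": 3, "risk": 2}

def automation_score(scores):          # scores: criterion -> 1..5
    return sum(WEIGHTS[c] * s for c, s in scores.items())

def verdict(total):
    if total < 5:
        return "never automate"
    if total > 50:
        return "automate"
    return "maybe"

tc1 = {"run_frequency": 5, "manual_cost": 4, "stability": 5, "risk": 3}
print(automation_score(tc1), verdict(automation_score(tc1)))  # 53 automate
```

The value of writing it down like this is that the team argues about the weights once, instead of arguing about every test case forever.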






Monday, 28 December 2015

Contemplative video games Part II

Back, with the round-up and value part of the summary. In Part 1 I rounded up 12 titles that approximately fit the bill of games that might calm the beast within. They were Dear Esther, Bastion, To the Moon, Trauma, Mind : Path to Thalamus, A new Beginning : Final cut, The Novelist, Year Walk, Proteus, Eidolon, Gone Home, The Graveyard.

Playability

Do the game mechanics make sense? Is the keyboard+mouse usable? Are you asked to achieve incredible acrobatics with the mouse to click on a tiny item? Is the game repetitive, nonsensical, or at times frustrating in terms of UI? A score of 5 means good, and obviously 0 = unusable.

  • Dear Esther - 5
  • Bastion - 5
  • To the Moon - 5
  • Trauma - 4
  • Mind : Path to Thalamus - 5
  • A new Beginning : Final cut - 4
  • The Novelist - 5
  • Year Walk - 5
  • Proteus - 5
  • Eidolon - 5
  • Gone Home - 5
  • The Graveyard - 2

Value

Price versus all the rest of the scored factors. Anything over £5 is penalised due to playtime expectations, and anything that has a free complete demo, like the browser-based option to play Trauma, will get a 5 for excellent value.

  • Dear Esther - £6.99 - 4
  • Bastion - £2.74 (demo) - 5
  • To the Moon - £1.39 - 5
  • Trauma - £4.39 (demo) - 5
  • Mind : Path to Thalamus - £2.99 - 3
  • A new Beginning : Final cut - £0.79 - 5
  • The Novelist - £5.49 - 4
  • Year Walk - £4.79 - 2
  • Proteus - £2.79 - 3
  • Eidolon - £5.50 - 4
  • Gone Home - £4.49 - 5
  • The Graveyard - £3.99 (demo) - 2


Artwork : Designed by Freepik - http://www.freepik.com/free-photos-vectors/school