Could you share any statistics or metrics from this engagement?
I've been doing software engineering for close to 20 years, and in only two of those positions did I not use either contract QA or contract development work. Compared to other collaborations I've had, I rank OnPath highly. I would recommend them without hesitation. The nice thing in my experience with Brian is that he's very good at being flexible with how we wanted to manage the team. I've worked with companies whose only model was a project manager we had to work through to assign tasks, and that's just an inefficient way to manage the small number of resources we had.
Brian was very flexible about the model we were using, and he was good about taking feedback. At one point, we had a test engineer who initially performed well, and then it seemed like there was a focus issue or something else going on. Brian was good about addressing that, and eventually helped us find a different person to fill the role and get a better resource in place. In general, their openness to making changes and adjusting on the fly has been great. I'm a huge fan.
We develop new products on about a six-month basis, so two or three times a year we're launching what we call a new video mashup: a new marketing site focused on whatever the new product line is. Historically, before we put a focus on QA, brought OnPath on, and created the structured QA methodology we have now, we would put those launches out and invariably find issues right away, whether with page content, incorrect presentation for customers, or problems in completing an order. Frankly, I don't have those issues anymore, partly because of process changes, but also because we can add the staffing needed to make sure we have good coverage.
The second piece, from my standpoint, is responsiveness and turnaround time for projects associated with the marketing team in particular. The ability to add contingent staff, and to have a resource layer I can grow as projects come in, has helped me maintain that responsiveness and turnaround time. When I joined the company, there was significant discussion about how the software engineering team could better support the marketing team, and why we couldn't get things done. That conversation has pretty much ended. In most cases now, we're really beholden to the marketing team's ability to create the copy that needs to go up online, rather than to our ability to get the technical work done and tested.
At an execution level, they've been very focused on the test work. Honestly, that's partly me focusing them in that area and driving the strategy for QA process evolution internally. The flip side is that Brian and I have had several very good, overarching conversations about where the relationship is going, what our longer-term focus is, and whether broadening test automation is something OnPath could support and help with. There's definitely strategic insight there, even though I haven't particularly leveraged it in the engagement with them.
How did OnPath Testing perform from a project management standpoint?
The project management tool we use is JIRA, the cloud-based version, and we have a very organized bug-tracking workflow. Brian actually helped establish JIRA as the company's bug-tracking tool before I came on board. From my standpoint, that was one of the nice things OnPath helped drive prior to my arrival.
What distinguishes OnPath Testing from other providers?
Like I said, it has to be their management approach and flexibility during project work. OnPath Testing has been very accommodating to us when things have come up unexpectedly. They're never afraid to give us crucial feedback, and they have made some key recommendations around structuring and improving our QA environment. I've been very grateful for that support.
Is there anything OnPath Testing could have improved or done differently?
The only thing is setting expectations with the contingent staff they bring in and making sure those align with my own expectations. When we do those product launches I mentioned, we tend to move into what we call our launch schedule, where we essentially have development and QA staff available about 20 hours a day, seven days a week – if not actively online and working, then on call – to be able to make changes.
We run a lot of A/B split testing at that point. At 11 o'clock at night, I need somebody available to rotate into the next test and make sure everything is functional from a testing standpoint. We had a couple of bumps with people who didn't understand that expectation upfront, and it took a bit of a reset. That hasn't happened in the recent past. Brian and I have discussed this, and I think he has taken it to heart.