listeningTo: cranking air conditioners and hot pacing doggies

inRealLife: My phone decided it only wants to hold calls for 6-12 minutes, which was not very conducive to my very long sprint demo, estimation, and planning calls today. We use Zoom, cool, so I could just use Internet Audio. Oh, but then the internet went down for about 3 hours. Today is a hard day for my technologies.

whatIReadThisWeek: I read a lot of books. I would say it’s because of the industry I work in, or the nature of my website, but, no, I’ve always been a bookworm. I finished Ill Will by Dan Chaon, which was a fast but intense, crazy read, and started Dark Age by Pierce Brown this morning – I can’t stop talking about this series (Red Rising), so I felt obligated to share here because I’m so freaking excited for this one.

But, okay, in tech news – a lot of this week has still been preoccupied by broken release issues (more on that later), but I did read through a bunch of headlines and saved some articles for further reading if I ever have that luxury again. Topics include: deciding when automated testing isn’t the best solution, automating visual testing (I’m excited for this!), and a few articles about Google Tag Manager, because in addition to QA Manager I am also the lead on many of my old Project Manager-responsibility projects, and Google Tag Manager is that pest I cannot exterminate. “Find gaps and fill them,” everyone I spoke to about job hunting instilled in me. This is a gap that should have been left unfilled. I’m sure I’ll dedicate a post to it shortly as I embark on a big refactoring project.

whatILearnedThisWeek: “Amanda, do not certify a release if you cannot predict what will happen on production. Be persistent. Be annoying. Do.Not.Release.” That’s less a lesson than a warning for future Amanda. As you can see in my previous post, I had a lot of questions and hesitation around releasing our current build, since it was so unpredictable how it would react out in the wild. I knew to be worried, and I tried to bring up my concerns on our stand-up calls, but no one else was as worried, and I let it go. Don’t let it go, guys. If you have reason to think something may not work, make sure you prove yourself wrong before certifying the release. Or else.

Well, not really or else. But it would have saved my colleagues and me a bunch of overtime and headache this past week. We had a lot of issues with testing the current release, and once it was out in the wild it BLEW.THE.HELL.UP. I griped about the lack of tools available to predict situations like this before we hit production, but at the end of the day, I found no good solution and was convinced the “let’s just see how it goes” method would work. It didn’t. We ended up having to roll back the release, which is the first time we have ever had to do that in the history of the company. Not cute.

In non-disastrous news, though, I did learn how to do a very basic migration of a Bootstrap 3 page to Bootstrap 4. My next post will be more about this for anyone who wants to hear how that goes. We have an ongoing project to migrate each and every page to Bootstrap 4 now that 3 is no longer being supported, and I am trying to learn it from the ground up by touching each file hands-on and making those changes. I’m starting with really simple, static, text pages for now, but as I get more comfortable (what I really mean is once I build the trust of my team) I hope to tackle more complex pages.
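For those simple static pages, a lot of the migration boils down to renaming classes that changed between Bootstrap 3 and 4. Here’s a minimal sketch of that kind of find-and-replace, using a few of the well-known renames (the function name and file are just illustrative) – real pages also need structural markup changes, like panels becoming cards, so this is only a starting point, not the whole migration:

```shell
# Hypothetical helper: apply a few well-known Bootstrap 3 -> 4 class renames.
# This only covers simple one-to-one renames; review the output by hand.
migrate_bs3_classes() {
  sed -e 's/img-responsive/img-fluid/g' \
      -e 's/btn-default/btn-secondary/g' \
      -e 's/center-block/mx-auto d-block/g'
}

# Example: pipe a Bootstrap 3 snippet through the helper
echo '<img class="img-responsive center-block" src="hero.png">' | migrate_bs3_classes
# -> <img class="img-fluid mx-auto d-block" src="hero.png">
```

In practice you’d run something like `migrate_bs3_classes < old-page.html > new-page.html` per file, then eyeball the rendered page, since automated renames can’t catch layout changes like the grid’s new flexbox behavior.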

whatIAmThinkingAbout: What would I have done? It was like knowing you were about to see a car crash a millisecond before the cars collide. I know on a personal level I need to work on being more assertive, finding my voice, and demanding authority where authority is due.

Professionally, though, I need to find a tool or a process for testing in production-like environments. I KNOW most people have great solutions for this; maybe I just need to survey what is out there and plead my case. This last release fiasco may be the ammo I need to drive my point home.

recommendationsAndTakeAways: I’m sure it’s been covered. What’s done is done. After too many retrospective calls it was determined that no one was at fault, which I agree with. It’s hard not to blame the tester, and in the past when things went wrong the knee-jerk reaction was usually to blame the tester, but it was acknowledged across the team that whatever went wrong – which to date is still unknown – is not happening on staging and could not have been caught by devs or testers in a non-production environment. Takeaways? Finding a production-like environment is more critical than ever.
