Synopsis
TestTalks is a weekly podcast hosted by Joe Colantonio that geeks out on all things software test automation. TestTalks covers news from the testing space, reviews books about automation, and speaks with some of the thought leaders in the test automation field. We'll aim to interview some of today's most successful and inspiring software engineers and test automation thought leaders. During the interviews, the spotlighted engineer will tell us about his or her testing experience, sharing their successes and failures as well as which testing techniques are working for them right now. We'll all learn more about testing through these talks, hence the name TestTalks.
Episodes
-
AI Testing Is Breaking Your Pipeline. Fix Quality Before It's Too Late with Eric Minick
15/04/2026 Duration: 29min
AI coding tools are helping teams move faster than ever, but there's a hidden cost. In this episode, we break down new insights from a DevOps industry report revealing a growing "velocity paradox": teams are shipping more code, but experiencing more failures, rollbacks, and burnout. You'll discover why AI adoption is heavily skewed toward coding, but not testing, pipelines, or observability, and how that imbalance is creating fragile systems that break under pressure. More importantly, you'll learn what high-performing teams are doing differently to maintain quality while scaling speed.
What You'll Discover:
✔️ Why AI is increasing deployment failures (and how to stop it)
✔️ The "velocity vs quality" trap hurting modern DevOps teams
✔️ How to reduce flaky tests and pipeline instability
✔️ Why observability and feature flags are now critical, not optional
✔️ Practical ways to improve your CI/CD pipeline for AI-driven development
✔️ The role of QA engineers in the age of AI (and why it's growing, not shrinking)
-
Scaling Quality Engineering: How to Deliver Faster Across Global Teams with Sunita McCoy
07/04/2026 Duration: 33min
AI is changing how we build and test software, but most teams are struggling to turn that promise into real results. In this episode, we break down what it actually takes to scale quality engineering across global teams without creating bottlenecks, burnout, or broken processes.
You'll learn:
- Why most test automation and transformation initiatives fail
- How to separate AI hype from reality
- What high-performing teams are doing differently to ship faster with confidence
Today's expert, Sunita McCoy, a Global Engineering Leader and Transformation Specialist, shares practical insights from leading large-scale engineering transformations, including:
- How to build a culture that supports AI adoption
- Why "quality as a phase" is dead
- How to shift toward treating quality as a product
If you're a QA leader, automation engineer, or DevOps professional trying to improve reliability, reduce risk, and future-proof your skills in the age of AI, this episode gives you a clear path forward.
-
Mobile Test Automation is Broken. Here's How QApilot Fixes It with Aditya Challa
31/03/2026 Duration: 37min
Mobile test automation is still one of the biggest bottlenecks in modern software delivery. In this interview, QApilot's co-founder Aditya Challa explains why most AI testing approaches fail and how to fix them. Learn more about QApilot: https://links.testguild.com/flutterqa
If your mobile tests are flaky, slow, or hard to trust, you're not alone. Most teams are trying to apply LLM-based AI to problems that actually require deterministic reliability—and that's where things break down.
In this video, you'll learn:
- Why mobile test automation breaks at scale
- The real issue with "99% accurate" AI in testing
- LLMs vs deterministic AI (and why it matters for mobile apps)
- How flaky tests destroy confidence in your pipeline
- How QApilot approaches mobile testing differently
- What reliable, scalable mobile automation should look like
What this means for you: fewer false positives, faster releases, and mobile tests you can actually trust.
00:00 Why Mobile Test Automation Is Still Broken
01:10 QApilot Overview
01:51 Why
-
AI Testing: How Solo Testers Stay Confident in Releases with Christine Pinto
25/03/2026 Duration: 44min
Are you the only tester on your team—and expected to ensure quality across everything? In this episode, we break down the growing challenge of solo QA testing in the age of AI-driven development—where code is generated faster than ever, but confidence hasn't caught up. Christine Pinto shares real-world insights from her experience as a solo tester and now as a founder building tools designed to help testers reduce risk, collaborate better, and make smarter release decisions.
You'll learn:
- Why "all tests passing" doesn't mean your product is safe
- The hidden risks of AI-generated code and test automation
- How to shift from test coverage to risk-based testing
- Practical ways solo testers can avoid burnout and isolation
- How to bring collaboration back into QA—even if you're the only tester
- Why better requirements still matter more than better AI
-
AI Testing from Production Logs: Generate Smarter Regression Tests with Tanvi Mittal
17/03/2026 Duration: 27min
What if your production logs could automatically generate new test cases? In this episode, Joe Colantonio sits down with Tanvi Mittal to break down how AI-powered log mining is changing the way teams approach software testing, quality engineering, and DevOps. Most teams ignore production logs or use them only for debugging. But those logs contain real user behavior, real failures, and real edge cases—the exact scenarios your test suite is probably missing.
-
AI Testing: How to Ensure Quality in Non-Deterministic Systems with Adam Sandman
10/03/2026 Duration: 43min
How do you ensure software quality when the system you're testing doesn't give the same output twice? Go to https://links.testguild.com/inflectra and start your free 30-day trial, no credit card, no contract required.
That's the core challenge facing every QA team building or testing AI-powered applications today, and it's breaking all the rules we've relied on for decades. In this episode of the TestGuild Automation Podcast, I sit down with Adam Sandman, co-founder of Inflectra, to get into what non-deterministic AI testing actually means in practice, why traditional pass/fail testing no longer cuts it, and what quality professionals need to do differently right now.
We cover:
- Why AI-generated code is raising the stakes for QA teams while budgets stay flat
- The fundamental difference between deterministic and non-deterministic systems — and why it changes everything about how you test
- How to set acceptable risk thresholds for AI systems (hint: it depends on whether you're building an e-commerce chatbot or an
-
Test Automation Tools That Scale: From Zero to 1.6M Users with Sanjay Kumar
03/03/2026 Duration: 29min
What does it really take to build a test automation tool that millions of testers rely on, without venture capital, paid ads, or a massive team? In this episode, we explore how SelectorsHub grew into one of the most widely used productivity tools in software testing, reaching over 1.6 million testers worldwide.
You'll discover:
- How to build test automation tools that solve real QA pain
- Why community-driven development beats chasing funding
- How to prioritize features when you have thousands of users
- Whether AI testing tools will replace selector-based automation
- How to choose between Playwright vs Selenium using automation analysis
- What founders and QA leaders can learn from scaling without VC
If you're an automation engineer, QA lead, DevOps professional, or tool builder looking to scale smarter, this episode delivers real-world insight without hype. Whether you're building frameworks internally or launching your own automation product, you'll walk away with a clearer strategy for solving problems testers a
-
AI Test Automation: Ship Twice as Fast with 10x Coverage with Karim Jouini
24/02/2026 Duration: 42min
AI test automation is evolving fast — but most tools still generate brittle code that breaks with every UI change. See it for yourself now: https://links.testguild.com/Thunders
In this episode of the TestGuild Podcast, Joe Colantonio sits down with Karim Jouini, founder of Thunders, to explore a radically different approach to AI testing: executing test automation in plain English without generating Selenium or Playwright code. Instead of "auto-healing selectors," Thunders interprets natural language directly — allowing teams to:
- Ship twice as fast
- Achieve 10x test coverage with the same resources
- Reduce regression cycles from weeks to days
- Eliminate massive automation maintenance overhead
Karim shares real-world case studies, including:
- A European bank that reduced a 3-year core banking upgrade testing effort to 4 months
- A SaaS company that transitioned from a traditional QA team to AI-assisted product-led testing
We also discuss:
- Whether AI test agents replace QA roles
- How QA managers must shift from i
-
Performance Testing with AI w/ Akash Thakur
17/02/2026 Duration: 26min
Is traditional performance testing becoming obsolete? In this episode, performance engineering expert Akash Thakur shares why AI is fundamentally transforming load testing, scripting, observability, and shift-left strategies. With 17 years of real-world enterprise experience, Akash explains how AI-augmented tools are already reducing scripting time by 30%, improving analysis speed, and helping teams move from reactive performance testing to predictive intelligence.
You'll learn:
- How AI is accelerating performance scripting and analysis
- Why shift-left performance testing is finally becoming realistic
- The role of structured data in predictive QA models
- How to test AI applications (LLMs, GPUs, inference throughput) differently than traditional web apps
- What the future role of performance engineers looks like — architect, not script writer
If you're a performance tester, SRE, QA leader, or DevOps engineer wondering how AI will impact your role — this episode gives you practical, actionable insights you can apply.
-
Spec2TestAI: Stop Defects Before They Reach Production with Missy Trumpler
27/01/2026 Duration: 34min
Most teams find defects after the damage is done — during regression, late-stage testing, or production incidents. That's expensive, stressful, and completely avoidable. Try Spec2Test AI now: https://testguild.me/spec2testdemo
In this episode, Joe Colantonio sits down with Missy Trumpler, CEO of AgileAILabs, to explore how Spec2TestAI helps teams prevent defects before code ships by applying AI directly to requirements.
You'll learn:
- Why traditional test automation still misses critical risk
- How predictive, requirements-based AI testing works in practice
- What "shift-left" actually looks like beyond the buzzword
- How to reduce escaped defects without writing more tests
- Why secure, explainable AI matters for QA and enterprise teams
This conversation is especially valuable for software testers, automation engineers, and QA leaders who want earlier visibility into risk, faster feedback, and higher confidence releases.
Don't miss Automation Guild 2026 - Register Now: https://testguild.me/podag26
-
Locust Performance Testing with AI and Observability with Lars Holmberg
13/01/2026 Duration: 30min
Performance testing often fails for one simple reason: teams can't see where the slowdown actually happens. In this episode, we explore Locust load testing and why Python-based performance testing is becoming the go-to choice for modern DevOps, QA, and SRE teams. You'll learn how Locust enables highly realistic user behavior, massive concurrency, and distributed load testing — without the overhead of traditional enterprise tools.
We also dive into:
- Why Python works so well for AI-assisted load testing
- How Locust fits naturally into CI/CD and GitHub Actions
- The real difference between load testing vs performance testing
- How observability and end-to-end tracing eliminate guesswork
- Common performance testing mistakes even experienced teams make
Whether you're a software tester, automation engineer, or QA leader looking to shift-left performance testing, this conversation will help you design smarter tests and catch scalability issues before your users do.
-
Top 8 Automation Testing Trends for 2026 with Joe Colantonio
06/01/2026 Duration: 12min
AI testing is everywhere — but clarity isn't. In this episode, Joe Colantonio breaks down the real test automation trends for 2026, based on data from 40,000+ testers, 510 live Q&A questions, and 50+ interviews with industry leaders. This isn't vendor hype or futuristic speculation. It's what working testers are actually worried about — and what they're doing next.
You'll learn:
- Why 72.8% of testers prioritize AI, yet don't trust it alone
- The real reason AI testing feels harder instead of easier
- How integration chaos is blocking automation success
- Why "AI auditor" and "quality strategist" are emerging career paths
- What agentic AI, MCPs, and vibe testing really mean in practice
- How compliance, accessibility, and security will redefine QA in 2026
If you're a tester, automation engineer, or QA leader trying to stay relevant — this episode gives you the signal through the noise, and a clear path forward.
-
Automation Testing Podcast 2026: New Schedule, Events, Discounts with Joe Colantonio
28/12/2025 Duration: 02min
This is a special end-of-year episode of the Automation Testing Podcast. With family in town and a busy holiday season, Joe didn't want to skip a week without checking in and saying thank you to the TestGuild community.
In this short episode, Joe shares:
- A huge milestone as the podcast approaches its 13-year anniversary
- Why the Automation Testing Podcast is moving from Sundays to Tuesdays starting in 2026
- How loyal listeners can still get $100 off a full 5-day Automation Guild 2026 pass
- A sneak peek at TestGuild IRL — live, in-person events coming next year
- Gratitude for the listeners, YouTube community, and sponsors who make TestGuild possible
If you're a software tester, automation engineer, or QA leader looking ahead to 2026, this episode lays out what's coming — and how to stay connected.
Discount code: 100GUILDCOIN (https://testguild.me/podag26)
Questions or ideas? Email Joe directly at joe@testguild.com
As always — test everything, and keep the good.
-
AI Testing LLMs & RAG: What Testers Must Validate with Imran Ali
21/12/2025 Duration: 32min
AI is transforming how software is built, but testing AI systems requires an entirely new mindset. Don't miss Automation Guild 2026 - Register Now: https://testguild.me/podag26 Use code TestGuildPod20 to get 20% off your ticket.
In this episode, Joe Colantonio sits down with Imran Ali to break down what AI testing really looks like when you're dealing with LLMs, RAG pipelines, and autonomous QA workflows.
You'll learn:
- Why traditional pass/fail testing breaks down with LLMs
- How to test non-deterministic AI outputs for consistency and accuracy
- Practical techniques for detecting hallucinations, grounding issues, and prompt injection risks
- How RAG systems change the way testers validate AI-powered applications
- Where AI delivers quick wins today—and where human validation still matters
This conversation goes beyond hype and gets into real-world AI testing strategies QA teams are using right now to keep up with AI-generated code, faster release cycles, and DevOps velocity. If you're a tester, automation engineer,
-
AI Codebase Discovery for Testers with Ben Fellows
14/12/2025 Duration: 44min
What if understanding your codebase was no longer a blocker for great testing? Most testers were trained to work around the code — clicking through UIs, guessing selectors, and relying on outdated docs or developer explanations. In this episode, Playwright expert Ben Fellows flips that model on its head. Using AI tools like Cursor, testers can now explore the codebase directly — asking questions, uncovering APIs, understanding data relationships, and spotting risk before a single test is written. This isn't about becoming a developer. It's about using AI to finally see how the system really works — and using that insight to test smarter, earlier, and with far more confidence. If you've ever joined a new team, inherited a legacy app, or struggled to understand what really changed in a release, this episode is for you.
Register for Automation Guild 2026 now: https://testguild.me/podag26
-
Gatling Studio: Start Performance Testing in Minutes (No Expertise Required) with Shaun Brown and Stephane Landelle
07/12/2025 Duration: 40min
Performance testing has traditionally been one of the hardest parts of QA: slow onboarding, complex scripting, difficult debugging, and too many late-stage surprises. Try Gatling Studio for yourself now: https://links.testguild.com/gatling
In this episode, Joe sits down with Stéphane Landelle, creator of Gatling, and Shaun Brown to explore how Gatling is reinventing the load-testing experience. You'll hear how Gatling evolved from a developer-first framework into a far more accessible platform that supports Java, Kotlin, JavaScript/TypeScript, and AI-assisted creation. We break down the thinking behind Gatling Studio, a new companion tool designed to make recording, filtering, correlating, and debugging performance tests dramatically easier.
Whether you're a developer, SDET, or automation engineer, you'll learn:
- How to onboard quickly into performance testing—even without deep expertise
- Why Gatling Studio offers a smoother way to record traffic and craft tests
- Where AI is already improving load test authoring
-
AI-Driven Manual Regression: Test Only What Truly Matters With Wilhelm Haaker and Daniel Garay
01/12/2025 Duration: 39min
Manual regression testing isn't going away—yet most teams still struggle with deciding what actually needs to be retested in fast release cycles. See how AI can help your manual testing now: https://testguild.me/parasoftai
In this episode, we explore how Parasoft's Test Impact Analysis helps QA teams run fewer tests while improving confidence, coverage, and release velocity. Wilhelm Haaker (Director of Solution Engineering) and Daniel Garay (Director of QA) join Joe to unpack how code-level insights and real coverage data eliminate guesswork during regression cycles. They walk through how Parasoft CTP identifies exactly which manual or automated tests are impacted by code changes—and how teams use this to reduce risk, shrink regression time, and avoid redundant testing.
What You'll Learn:
- Why manual regression remains a huge bottleneck in modern DevOps
- How Test Impact Analysis reveals the exact tests affected by code changes
- How code coverage + impact analysis reduce risk without expanding the test suite
- Ways
-
Top Automation Guild Survey Insights for 2026 with Joe Colantonio
24/11/2025 Duration: 08min
Automation Guild turns 10 this year, and the 2026 survey revealed some of the strongest trends and signals the testing community has ever shared. Register now: https://testgld.link/ag26reg
In this episode, Joe breaks down the most important insights shaping Automation Guild 2026 and what they mean for testers, automation engineers, and QA leaders. You'll hear why AI-powered testing is dominating every category, why Playwright has officially become the tool testers want most, the challenges that continue to follow teams year after year, and how testers are navigating shrinking teams, faster releases, and rising expectations. This episode gives you a clear, data-driven snapshot of why Automation Guild 2026 matters — and how this year's event is designed to help you stay relevant, sharpen your skills, and tackle the problems that keep slowing down teams. Perfect for anyone considering joining the Guild, planning their 2026 automation strategy, or just trying to make sense of the rapid changes happening in testing.
-
Testing AI Vibe Coding: Stop Vulnerabilities Early with Sarit Tager
16/11/2025 Duration: 32min
AI is accelerating software delivery, but it's also introducing new security risks that most developers and automation engineers never see coming. In this episode, we explore how AI-generated code can embed vulnerabilities by default, how "vibe coding" is reshaping developer workflows, and what teams must do to secure their pipelines before bad code reaches production. You'll learn how to prompt more securely, how guardrails can stop vulnerabilities at generation time, how to prioritize real risks instead of false positives, and how AI can be used to protect your applications just as effectively as attackers use it to exploit them. Whether you're using Cursor, Copilot, Playwright MCP, or any AI tool in your automation workflow, this conversation gives you a clear roadmap for staying ahead of AI-driven vulnerabilities — without slowing down delivery.
Featuring Sarit Tager, VP of Product for Application Security at Palo Alto Networks, who reveals real-world insights on securing AI-generated code, understanding
-
4 Free TestGuild Tools Every Tester Should Be Using with Joe Colantonio
09/11/2025 Duration: 17min
In this solo episode, Joe Colantonio shares four powerful free TestGuild tools designed to help testers, automation engineers, and QA leaders work smarter. Discover how to instantly find the right testing tool for your team, assess automation risk, check your site's accessibility, and benchmark your automation maturity — all in one session. Whether you're looking to improve test coverage, adopt better practices, or simply save time, these tools were built with you in mind.
What You'll Learn:
- How to choose the right test automation tool fast
- How to identify and reduce testing risk
- How to check your site's accessibility compliance
- How to assess your team's automation maturity level
Try the tools free:
- Tool Matcher: https://testgld.link/toolmatcher
- Accessibility Scanner: https://testgld.link/scanner
- Risk Calc: https://testgld.link/riskcalc
- Automation Readiness Quiz: https://testgld.link/scorequiz
Join us for the 10th Annual Automation Guild Conference: https://testgld.link/IrHaNIVX