Episodes

  • Episode 10: Failing Fast and Fast Feedback
    2024/12/19

    In this episode, we explore the concept of the 10-Minute Test Plan and why “failing fast” is beneficial for agile development. We discuss the ongoing importance of manual testers in the software development lifecycle and delve into the unique culture of “Googliness” at Google, as well as the benefits of Story Mapping.


    We emphasize the importance of empowering your team to make decisions and why allowing them to make mistakes can lead to growth and innovation. We also cover the significance of measuring what matters and adopting a holistic approach to software engineering.


    Additionally, we highlight Google’s Auto-Test system and the practice of dogfooding, as well as the value of fast feedback loops. Finally, we examine the key characteristics that define a good leader in the tech industry.


    Join us for a deep dive into effective testing strategies and leadership principles.


    References:

    • The Weasel Speaks Blog
    • How Google Tests Software
    • Holistic Testing: Weave Quality into Your Product
    16 min
  • Episode 9: Holistic Testing and Feeding the Dogs
    2024/12/10

    In this episode, we delve into the principles of holistic testing and the whole-team approach, emphasizing the importance of creating an environment where every team member feels empowered and takes ownership of their work. We discuss strategies for accurately estimating project timelines and explore how Google employs “dogfooding” — testing products internally before releasing them to external customers.


    We also highlight the significance of fostering a continuous-learning mindset within your team. Finally, we explore how the “Goal-Question-Metric” approach can be used to identify the metrics that genuinely matter for your projects.
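
    To make the approach concrete, here is a minimal sketch (our own illustrative example, not one from the episode) of how a goal is refined into questions, and each question into measurable metrics:

    ```python
    # A minimal Goal-Question-Metric sketch. The goal, questions, and metrics
    # below are illustrative assumptions, not examples from the episode.
    gqm = {
        "goal": "Improve the stability of our releases",
        "questions": [
            {
                "question": "How often do deployments cause production failures?",
                "metrics": ["change failure rate (% of deployments causing incidents)"],
            },
            {
                "question": "How quickly do we recover when a failure occurs?",
                "metrics": ["mean time to restore service (hours)"],
            },
        ],
    }

    # Print each goal -> question -> metric chain.
    for q in gqm["questions"]:
        print(f'{gqm["goal"]} -> {q["question"]} -> {", ".join(q["metrics"])}')
    ```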


    Join us to learn how to build a culture of empowerment, continuous improvement, and effective measurement in your software development and testing processes.


    References:

    • The Weasel Speaks Blog
    • The Feiner Points of Leadership: The 50 Basic Laws that Will Make People Want to Perform Better for You
    • Leadership on the Line: Staying Alive through the Dangers of Leading
    • Holistic Testing: Weave Quality into Your Product
    • How Google Tests Software
    14 min
  • Episode 8: Prompt and Circumstance - Prompt Engineering for Testers
    2024/12/02

    In this episode, we delve into the world of Prompt Engineering, specifically tailored for software testers.


    Discover how to extract the most precise information from Large Language Models (LLMs) using three distinct frameworks (a sample prompt sketch follows the list):


    • RACE: Role, Action, Context, Expectation

    • COAST: Context, Objective, Actions, Scenario, Task

    • APE: Action, Purpose, Expectation
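
    As a quick illustration (our own sketch, not wording from the episode), a RACE-style prompt for a tester might be assembled like this:

    ```python
    # A minimal RACE (Role, Action, Context, Expectation) prompt sketch.
    # The scenario and field constraints below are illustrative assumptions.
    prompt = """
    Role: You are a senior software tester specializing in web applications.
    Action: Generate boundary-value test cases for the signup form below.
    Context: The form has a username field (3-20 characters) and an age
    field that must accept values from 18 to 120.
    Expectation: Return the test cases as a CSV table with the columns:
    input, expected result.
    """.strip()

    print(prompt)  # paste into (or send to) the LLM of your choice
    ```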


    Learn why it’s crucial to define the AI’s role and provide ample context to ensure accurate responses. Find out how to refine your queries if you’re unsatisfied with the answers, and explore techniques like asking for more details or inquiring about the AI’s confidence in its response.

    We also explore innovative ways to enhance your testing processes, such as uploading UI screenshots to generate test cases or even having the AI produce test cases in CSV format for seamless integration with your test case management system.

    Additionally, we discuss the intriguing decline in Stack Overflow traffic due to the rise of AI, and the potential long-term impact on AI quality as these models rely on platforms like Stack Overflow for training data.

    Tune in to gain valuable insights and elevate your software testing with the power of AI!

    More Resources:

    • https://www.linkedin.com/pulse/prompting-testers-jason-arbon/
    • https://testsigma.com/blog/prompt-engineering-for-testers/
    • https://www.linkedin.com/feed/update/urn:li:activity:7256493948087476224/
    17 min
  • Episode 7: More Agile Myth Busting
    2024/11/25

    In this episode of Testing and Management Insights, we debunk two common myths of agile testing: the belief that agile means no documentation and that testers aren't needed.

    We explore the concept of shifting left and the importance of integrating quality into every step of development. We delve into the Agile Testing Quadrants and discuss the different types of tests that support the engineering team versus those that support the customer.

    This episode emphasizes the importance of continual collaboration with your customer and introduces practical tools like Story Checklists for capturing easy-to-miss requirements and Mind Maps for brainstorming test scenarios and seeing the big picture. We also discuss the Power of Three rule, the balance of how much to automate, and the test automation pyramid.


    More Resources:

    • Agile Testing: A Practical Guide for Testers and Agile Teams
    • More Agile Testing: Learning Journeys for the Whole Team
    • Holistic Testing: Weave Quality into Your Product
    10 min
  • Episode 6: Testing Tales and Leadership Lessons
    2024/11/18

    In this episode, we delve into the essential strategies for cultivating an exceptional software team, emphasizing the importance of building trust and fostering a collaborative environment. We explore key motivators that drive team performance: mastery, autonomy, and purpose. The discussion highlights the shift in leadership from micromanaging to coaching, empowering team members to take ownership and make informed decisions.


    We introduce the SBI model (Situation, Behavior, Impact) as a structured approach to giving constructive feedback, enhancing communication, and promoting continuous improvement. The episode also covers vital concepts such as the Testing Pyramid and Agile Testing Quadrants, which are crucial for efficient software development and testing processes.


    Additionally, we examine the practice of Bug Bashes as a proactive method to identify and resolve issues collaboratively. Finally, we provide seven essential rules for conducting effective meetings, ensuring they are productive and time-efficient.


    Tune in to gain valuable insights and practical tips for leading your software team to success.


    More Resources:

    • How Google Tests Software
    • How We Test at Microsoft
    • Agile Testing: A Practical Guide for Testers and Agile Teams
    • Building Great Software Engineering Teams: Recruiting, Hiring, and Managing Your Team from Startup to Success
    • Management 3.0: Leading Agile Developers, Developing Agile Leaders
    10 min
  • Episode 5: Exploring DORA 2024
    2024/11/11

    In this episode of TMI, we dive deep into the findings of the 10th annual DORA 2024 Accelerate State of DevOps Report, which surveyed over 40,000 tech professionals. Join us as we explore the transformative impact of AI integration in the tech industry:

    • Discover how 81% of organizations are prioritizing AI, with 76% of developers incorporating it into their daily workflows.
    • Delve into the paradox of AI-driven productivity: while individual engineers report significant gains, organizations face challenges with slower and less stable software delivery.
    • Learn how AI adoption enhances code quality, improves documentation, and expedites code reviews and approvals.
    • Gain insights into strategies for leaders to support their teams in successfully integrating AI.

    We also revisit DORA’s four key metrics, which are essential yet only part of the larger picture (a minimal computation sketch follows the list):

    • Deployment Frequency: The cadence of successful production releases.
    • Lead Time for Changes: The duration from commit to production.
    • Change Failure Rate: The percentage of deployments causing production failures.
    • Time to Restore Service: The recovery time from production failures.
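
    To make the definitions concrete, here is a minimal computation sketch (our own, with made-up data, not taken from the report):

    ```python
    # The four DORA keys computed from a tiny, made-up deployment log.
    from datetime import datetime, timedelta

    # Each record: (commit time, deploy time, caused a failure?, time to restore)
    deployments = [
        (datetime(2024, 11, 1, 9), datetime(2024, 11, 1, 15), False, None),
        (datetime(2024, 11, 2, 10), datetime(2024, 11, 3, 11), True, timedelta(hours=2)),
        (datetime(2024, 11, 4, 8), datetime(2024, 11, 4, 12), False, None),
    ]
    days_observed = 7

    frequency = len(deployments) / days_observed  # deploys per day
    lead_times = [deploy - commit for commit, deploy, _, _ in deployments]
    avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)
    failures = [restore for _, _, failed, restore in deployments if failed]
    failure_rate = len(failures) / len(deployments)
    time_to_restore = sum(failures, timedelta()) / len(failures)

    print(f"Deployment frequency: {frequency:.2f}/day")
    print(f"Lead time for changes: {avg_lead_time}")
    print(f"Change failure rate: {failure_rate:.0%}")
    print(f"Time to restore service: {time_to_restore}")
    ```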

    Explore the necessity of dashboards that bridge technical and business metrics, and understand the critical role of human elements such as well-being and culture in DevOps success.


    Finally, we reflect on a decade-long core concept from DORA: the significance of “user centricity.” Learn why a user-centered approach leads to high product quality, regardless of your team’s size or velocity.


    Tune in for an insightful discussion on aligning technical excellence with business outcomes and user needs.


    More Resources:

    • https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report
    • https://cloud.google.com/blog/products/devops-sre/using-the-four-keys-to-measure-your-devops-performance
    18 min
  • Episode 4: Bugging Out
    2024/11/05

    Explore the world of whole-team testing and discover how Test Driven Development (TDD) fosters smarter design and more testable code. Delve into the crucial role of the Product Owner as the customer’s advocate. Gain insights into Google’s powerful internal tools that execute millions of regression tests and learn why manual testing remains vital. Embark on exploratory testing tours like the “All Nighter Tour” and the “Garbage Collector Tour”. Discover Google’s bug database, Buganizer, which analyzes trends and predicts future bug locations. Finally, uncover how Microsoft approaches testing for the cloud and explore James Whittaker’s concept for a “Testipedia.”
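
    As a tiny illustration of the test-first rhythm (our own example, not code from the episode): the tests are written first, fail, and then just enough code is written to make them pass.

    ```python
    # A minimal test-driven development sketch. The slugify() function and its
    # tests are illustrative assumptions, not code discussed in the episode.
    import unittest

    def slugify(title: str) -> str:
        """Turn an episode title into a URL-friendly slug."""
        return "-".join(title.lower().split())

    class TestSlugify(unittest.TestCase):
        # In TDD these tests come first and fail until slugify() is written.
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Bugging Out"), "bugging-out")

        def test_collapses_extra_whitespace(self):
            self.assertEqual(slugify("  Testing   Tales "), "testing-tales")

    if __name__ == "__main__":
        unittest.main()
    ```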


    More Resources:

    • Exploratory Software Testing
    • A Practitioner’s Guide to Software Test Design
    • How Google Tests Software
    • How We Test at Microsoft
    • Agile Testing: A Practical Guide for Testers and Agile Teams
    • The A Word
    16 min
  • Episode 3: Testing the Waters of AI
    2024/10/29

    Dive into the world of AI-driven testing in our latest episode! Discover how Meta is pioneering the use of AI to autonomously test other AIs, eliminating the need for human intervention. We explore the challenges faced by human testers in keeping up with the rapid pace of AI-generated code and how AI-first testing can revolutionize the process. Learn about AI-augmented testing, where AI acts as a co-pilot, enhancing the capabilities of human testers. We’ll also delve into the art of Prompt Engineering, crafting precise prompts to maximize AI performance. Finally, we discuss the irreplaceable role of human critical thinking in ensuring AI accuracy. Tune in for a thought-provoking conversation!


    More Resources:

    • What Are We Thinking — in the Age of AI? with Michael Bolton (a PNSQC Live Blog)
    • When Humans Tested Software (AI First Testing) with Jason Arbon (a PNSQC Live Blog)
    • AI-Augmented Testing: How Generative AI and Prompt Engineering Turn Testers into Superheroes, Not Replace Them with Jonathon Wright (a PNSQC Live Blog)
    • Meta releases AI model that can check other AI models' work
    11 min