Most software engineering graduates enter the workforce having written thousands of lines of code but fewer than a dozen test cases. They can build a REST API from scratch but cannot articulate a test strategy for it. This is not their fault — it's a curriculum problem. And after teaching quality assurance at UPC in Lima for several years while simultaneously leading QA teams in production environments, I've developed strong opinions about how to close this gap.
This article shares what I've learned about making QA education genuinely useful — not just academically correct, but professionally relevant from the first day a graduate joins a team.
The Gap Between Academic QA and Industry Reality
Traditional computer science curricula treat testing as an afterthought. A typical software engineering course might dedicate two lectures to "verification and validation," cover the V-model, mention black-box vs. white-box techniques, and move on. Students graduate knowing the vocabulary of testing without understanding its practice.
Meanwhile, the industry needs engineers who can design a test pyramid for a microservices architecture, write reliable E2E tests in Playwright, configure a CI/CD pipeline with quality gates, and make risk-based decisions about what to automate and what to test manually. The disconnect is enormous.
The root cause: most QA curricula are designed by academics who haven't worked in software delivery teams. They teach testing theory well but miss the operational reality — the flaky tests, the environment configuration nightmares, the political dynamics of convincing a product owner that test infrastructure deserves investment.
Curriculum Design: What Students Actually Need
When I redesigned the QA module at UPC, I started with a question I ask every industry professional I work with: "What do you wish your junior QA engineers already knew on day one?" The answers were remarkably consistent:
- Test strategy thinking — the ability to look at a feature and decide which tests to write, at which level, and why.
- Automation fundamentals — not just tool syntax, but understanding the Page Object pattern, test data management, and what makes a test reliable vs. flaky.
- API testing — most modern applications are API-first, yet graduates rarely know how to use Postman, let alone write contract tests.
- CI/CD awareness — understanding how tests fit into a delivery pipeline, what a quality gate is, and why test execution time matters.
- Communication — writing a clear bug report, defending a quality decision to stakeholders, explaining technical risk in business terms.
Notice what's absent from this list: nobody asked for students who memorized ISTQB terminology. They want engineers who can think about quality, not just define it.
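The CI/CD awareness item is the easiest one to make concrete. Below is a minimal sketch of a GitHub Actions workflow acting as a quality gate — the job layout, Node version, and script names (`npm test`, the Playwright step) are illustrative assumptions, not a prescription:

```yaml
name: ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Quality gate: any failing step turns the pipeline red,
      # and branch protection blocks the merge.
      - run: npm test
      - run: npx playwright install --with-deps && npx playwright test
```

The "gate" is nothing more than the rule that a red pipeline blocks the merge; once students see that, test execution time stops being an abstract concern and becomes a number they watch on every push.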
Hands-On From Day One
My courses begin with tools, not theory. In the first session, students install Playwright, write a test that navigates to a real website and asserts something visible, and run it. By the end of week one, they've seen a green test pass and a red test fail. They've experienced the feedback loop.
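That first-session exercise can be as small as the following sketch, written against Playwright's test runner — the URL and the specific assertion are chosen purely for illustration:

```typescript
import { test, expect } from '@playwright/test';

// First test: open a real page and assert something visible.
test('homepage shows the main heading', async ({ page }) => {
  await page.goto('https://playwright.dev');
  // Web-first assertion: retries until the heading is visible or times out.
  await expect(page.getByRole('heading', { level: 1 })).toBeVisible();
});
```

Running `npx playwright test` gives students the green/red feedback loop immediately; changing the assertion to something false gives them their first red test.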
Theory follows practice, not the other way around. After students have written tests that break due to timing issues, I introduce the concept of synchronization strategies. After they've struggled with duplicated setup code, I introduce fixtures and the Page Object Model. The theory lands because they've already felt the pain it addresses.
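The jump from duplicated setup code to the Page Object Model is easier to see than to describe. Here is a deliberately framework-free sketch — the `Page` interface is a stand-in I've defined for illustration, not Playwright's real `Page` class:

```typescript
// Minimal Page Object Model sketch, assuming a tiny hand-rolled
// Page interface (a stand-in for a real browser driver).
interface Page {
  fill(selector: string, value: string): void;
  click(selector: string): void;
}

class LoginPage {
  // Selectors live in one place: a UI change touches one file, not every test.
  private readonly userInput = '#username';
  private readonly passInput = '#password';
  private readonly submitBtn = 'button[type=submit]';

  constructor(private page: Page) {}

  login(user: string, pass: string): void {
    this.page.fill(this.userInput, user);
    this.page.fill(this.passInput, pass);
    this.page.click(this.submitBtn);
  }
}

// A recording fake shows the page object drives the page in the right order.
const actions: string[] = [];
const fakePage: Page = {
  fill: (sel, value) => { actions.push(`fill ${sel}=${value}`); },
  click: (sel) => { actions.push(`click ${sel}`); },
};
new LoginPage(fakePage).login('maria', 's3cret');
```

Once students have felt the pain of copy-pasted selectors across ten spec files, this structure stops being a pattern to memorize and becomes the obvious refactoring.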
This approach mirrors how professionals actually learn. Nobody reads the ISTQB Foundation Level syllabus cover to cover before writing their first test. They learn by doing, and they seek theory when they hit a wall. My classroom replicates that cycle intentionally.
Teaching Strategy, Not Just Tools
Tools change. Selenium dominated for a decade; now Playwright and Cypress are the standard. Teaching a specific tool's API has a half-life of maybe three years. Teaching test strategy lasts an entire career.
I dedicate significant time to exercises where students receive a product specification and must produce a test strategy document — not code, just decisions. What testing levels? What's the automation vs. manual split? Where are the highest-risk areas? What environments are needed? What are the exit criteria?
These exercises force students to think like QA leads, not just test writers. They quickly discover that the hardest part of QA isn't writing the test — it's deciding which test to write and when it's good enough to stop testing.
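One heuristic I have students apply in these exercises can even be expressed in code: score each feature by failure likelihood and business impact, then spend testing effort from the top of the list down. The 1–5 scale and the sample features below are invented for the exercise:

```typescript
// Risk-based prioritization sketch: likelihood and impact on a 1-5 scale.
interface Risk {
  feature: string;
  likelihood: number; // how likely is a defect? (complexity, churn, history)
  impact: number;     // how bad would a defect be? (revenue, users, reputation)
}

const risks: Risk[] = [
  { feature: 'checkout payment', likelihood: 3, impact: 5 },
  { feature: 'profile avatar upload', likelihood: 4, impact: 2 },
  { feature: 'marketing banner', likelihood: 2, impact: 1 },
];

// Higher score = test first, automate first.
const prioritized = risks
  .map(r => ({ ...r, score: r.likelihood * r.impact }))
  .sort((a, b) => b.score - a.score);
```

The numbers are crude on purpose. The value is in the argument students must make to defend each score — which is exactly the conversation a QA lead has with a product owner.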
ISTQB as Foundation, Not Ceiling
The ISTQB Foundation Level certification provides excellent vocabulary and a shared mental model for testing concepts. I use its structure as the backbone of my curriculum. Equivalence partitioning, boundary value analysis, decision tables, state transition testing — these techniques are timeless and universally applicable.
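Equivalence partitioning and boundary value analysis are exactly the concepts I pair with a coding exercise. Assuming a hypothetical `ticketPrice` function with three age partitions (the cut-offs and prices are invented for the example):

```typescript
// Three equivalence partitions: child (0-12), adult (13-64), senior (65+).
function ticketPrice(age: number): number {
  if (!Number.isInteger(age) || age < 0) throw new RangeError('invalid age');
  if (age <= 12) return 5;
  if (age <= 64) return 10;
  return 7;
}

// Boundary value analysis: test each partition edge and its neighbor,
// because off-by-one mistakes cluster at the boundaries.
const boundaryAges = [0, 12, 13, 64, 65];
const prices = boundaryAges.map(ticketPrice);
```

One representative value per partition plus the values on either side of each boundary gives high defect-finding power for very few test cases — the whole point of both techniques.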
But I'm explicit with students: ISTQB certification alone does not make you a QA engineer. It makes you someone who understands QA terminology. The certification doesn't teach you how to debug a flaky Playwright test, how to design test data for a multi-tenant system, or how to negotiate test scope with a product manager under delivery pressure.
I encourage students to pursue the certification as a structured learning path, but I pair every theoretical concept with a practical exercise that grounds it in real tooling and real scenarios.
Student Projects: Testing Real Software
The most impactful part of the course is the final project. Students don't test toy applications — they test real open-source projects. Past cohorts have written test suites for projects like TodoMVC, contributed bug reports to open-source issue trackers, and even submitted pull requests with test improvements.
This achieves several things simultaneously. Students interact with codebases they didn't write, which mirrors industry reality. They practice reading documentation and making testing decisions with incomplete information. They experience the satisfaction of contributing to a real project, which builds professional confidence. And they build a portfolio artifact they can show in job interviews.
One former student landed her first QA role partly because she could show an interviewer a Playwright test suite she'd written for an open-source project, complete with CI integration via GitHub Actions. That's the kind of outcome that validates the approach.
Mentoring the Next Generation
Teaching is mentoring at scale. Every semester, I see students who arrive convinced that QA is "less technical" than development. By the end of the course, many of them recognize that quality engineering requires deep technical skill, strategic thinking, and communication ability in equal measure.
The students who thrive are not necessarily the strongest programmers. They're the ones who develop a quality mindset — the habit of asking "what could go wrong?" before asking "how do I build this?" That mindset is teachable, but it requires deliberate practice, not just lectures.
I also connect top students with professionals in my network for internships and first roles. The QA community in Latin America is growing fast, and the demand for well-prepared junior engineers far exceeds supply. Universities that invest in practical QA education will produce graduates who are immediately valuable to industry.
What Industry Can Learn From Academia
The knowledge transfer is not one-directional. Working in academia has sharpened my industry practice in unexpected ways. Teaching forces you to articulate tacit knowledge — the intuitions and heuristics that experienced QA leads apply unconsciously. When a student asks "why do you start testing there?" you need an answer more rigorous than "because it feels risky."
Academic rigor also provides frameworks for evaluating testing approaches systematically. The industry often adopts tools and practices based on conference talks and blog posts. Academia demands evidence. Combining both perspectives — practical experience with analytical rigor — produces better QA leaders.
The best QA education doesn't choose between theory and practice. It teaches theory through practice, and uses practice to challenge theory.
If you're an educator designing a QA curriculum, start with industry needs and work backward. If you're a QA leader, consider guest lecturing at a local university — the students need your perspective, and you'll sharpen your own thinking in the process. The future quality of our software depends on how well we prepare the next generation to think critically about it.