1. Should We Use TDD?
Context factors that say YES:
- High-risk/critical software
- Complex business logic
- Team experienced with TDD
- Code will live for years
Context factors that say NO:
- Prototyping/exploring
- Throwaway code/data scripts
- Team has zero TDD experience and no time to learn
- UI-heavy work with unclear requirements
Context factors that say SOME:
- Mix of complex and simple code
- Team learning TDD gradually
- Critical modules only
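Where TDD is adopted, the rhythm is test-first: write a failing test, make it pass, then refactor. Below is a minimal sketch in Python using the standard unittest module; the Order class and its discount rule are hypothetical, purely to illustrate the cycle.

```python
import unittest


# Hypothetical production code, written only after the tests below existed.
class Order:
    def __init__(self, subtotal: float):
        self.subtotal = subtotal

    def total(self) -> float:
        # Business rule under test: orders over 100 get a 10% discount.
        if self.subtotal > 100:
            return round(self.subtotal * 0.9, 2)
        return self.subtotal


class OrderTotalTest(unittest.TestCase):
    # Written first; it fails until Order.total implements the rule.
    def test_discount_applied_over_threshold(self):
        self.assertEqual(Order(200).total(), 180.0)

    def test_no_discount_at_or_below_threshold(self):
        self.assertEqual(Order(100).total(), 100)


if __name__ == "__main__":
    unittest.main()
```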
2. Should We Provide Estimates?
Context factors that say YES:
- Business needs budget planning
- Coordinating with external dependencies
- Customer commitments required
- Leadership demands predictability
Context factors that say NO:
- Pure discovery/research mode
- Estimates are used as commitments and create pressure
- Work is too novel to estimate
- Focus on continuous flow over project milestones
Context factors that say “DATES NOT ESTIMATES”:
- Stakeholders need planning visibility
- Estimates are contentious or misused
- You can commit to “done by X” without detailed breakdown
3. Should We Require All Coding to Be Done as Pair Programming?
Context factors that say YES:
- Team is co-located and communication is critical
- Complex or safety-critical code
- Team is experienced with pair programming
- Desire for shared code ownership and knowledge transfer
Context factors that say NO:
- Remote or distributed teams without good collaboration tools
- Simple or routine coding tasks
- Tight deadlines with no time for pairing
- Team unfamiliar with pair programming
Context factors that say SOME:
- Pair programming for complex or critical modules only
- Pairing during onboarding or knowledge transfer
4. Should We Require All Coding to Be Done Using TDD?
Context factors that say YES:
- High-risk/critical software
- Complex business logic
- Team experienced with TDD
- Code will live for years
Context factors that say NO:
- Prototyping or exploratory coding
- Throwaway scripts or one-off tasks
- Team has no TDD experience and no time to learn
- UI-heavy work with unclear requirements
Context factors that say SOME:
- Mix of complex and simple code
- Team learning TDD gradually
- Critical modules only
5. Should We Require All Code Commits to Have a Peer Review/Pull Request Before Incorporating Them Into Our Feature Branch?
Context factors that say YES:
- Need for high code quality and consistency
- Distributed or large teams
- Regulatory or compliance requirements
- Desire for knowledge sharing and defect prevention
Context factors that say NO:
- Small co-located team with good communication
- Prototyping or throwaway code
- Urgent hotfixes or emergency patches
Context factors that say SOME:
- Peer review for critical or complex changes only
- Lightweight reviews for minor changes
6. Should We Have Retrospectives Every Two Weeks?
Context factors that say YES:
- Agile teams practicing Scrum or similar
- Desire for continuous improvement
- Stable team composition
Context factors that say NO:
- Teams in pure Kanban or continuous flow without fixed iterations
- Very small teams with informal feedback loops
- Teams under extreme time pressure
Context factors that say SOME:
- Retrospectives monthly or after major milestones
- Shorter or less formal retrospectives
7. Should We Automate Our Build?
Context factors that say YES:
- Frequent code changes
- Multiple developers or teams
- Need for repeatable builds
- Desire for continuous integration
Context factors that say NO:
- Very small projects or solo developers
- Prototyping or throwaway code
Context factors that say SOME:
- Partial automation for critical components
8. Should We Build on Every Commit?
Context factors that say YES:
- Continuous integration culture
- Fast build times
- Automated tests available
Context factors that say NO:
- Long or resource-intensive builds
- Legacy systems without CI infrastructure
Context factors that say SOME:
- Build on pull requests or nightly builds
9. Should We Deploy on Every Commit?
Context factors that say YES:
- Mature CI/CD pipelines
- Fast, reliable testing
- Low-risk or feature-flagged deployments
Context factors that say NO:
- Manual approval processes
- Complex deployment environments
- Regulatory constraints
Context factors that say SOME:
- Deploy to staging on every commit, production less frequently
10. Should We Run Regression Tests on Every Commit?
Context factors that say YES:
- High test coverage
- Fast, reliable tests
- Critical systems requiring stability
Context factors that say NO:
- Slow or flaky tests
- Lack of automation
Context factors that say SOME:
- Run smoke tests on every commit, full regression nightly
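The "SOME" option above is straightforward to wire up with test markers. A minimal sketch using pytest's custom markers (a real pytest feature); the test bodies are placeholders, and the marker would also need to be registered in the project's pytest configuration to avoid warnings.

```python
import pytest


@pytest.mark.smoke
def test_service_health_endpoint():
    # Fast check, run on every commit.
    assert 1 + 1 == 2  # placeholder for a real health probe


def test_full_reporting_pipeline():
    # Slower end-to-end check, run only in the nightly full regression.
    assert sum(range(10)) == 45
```

On every commit the CI job runs only the marked subset with `pytest -m smoke`; the nightly job runs plain `pytest` for the full regression suite.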
11. Should We Use Story Points or Time-Based Estimates?
Context factors that say STORY POINTS:
- Agile teams practicing Scrum or Kanban
- Focus on relative complexity over exact time
- Desire to measure team velocity
Context factors that say TIME-BASED:
- Stakeholders require calendar-based plans
- Project management requires resource allocation
- Teams new to estimation
Context factors that say BOTH OR EITHER:
- Hybrid teams with mixed needs
- Transitioning from time to story points
12. Should we track velocity as a team metric?
Context factors that say YES:
- Agile teams using story points
- Desire to measure and improve team performance
- Stable team composition
Context factors that say NO:
- Teams new to Agile or estimation
- Velocity used as a performance pressure tool
- High team turnover or changing team members
Context factors that say SOME:
- Use velocity trends cautiously
- Combine with other qualitative metrics
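If velocity is tracked, treating it as a trend rather than a target is easier when it is computed as a rolling average over recent sprints. A minimal sketch; the sprint numbers are made up.

```python
def rolling_velocity(completed_points: list[int], window: int = 3) -> list[float]:
    """Average of the last `window` sprints, one value per sprint once enough history exists."""
    return [
        sum(completed_points[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(completed_points))
    ]


# Example: points completed in the last six sprints (hypothetical data).
history = [21, 18, 25, 20, 23, 19]
print(rolling_velocity(history))  # [21.33, 21.0, 22.67, 20.67] (approximately)
```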
13. Should we have daily standup meetings?
Context factors that say YES:
- Co-located or well-connected remote teams
- Need for daily synchronization and quick issue resolution
- Agile or Scrum teams
Context factors that say NO:
- Teams with asynchronous workflows
- Distributed teams in multiple time zones
- Teams with low interdependency
Context factors that say SOME:
- Shorter or less frequent standups
- Use asynchronous updates when needed
14. Should we limit work in progress (WIP)?
Context factors that say YES:
- Teams practicing Kanban or Lean
- Desire to improve flow and reduce multitasking
- High context-switching costs
Context factors that say NO:
- Teams with low task interdependency
- Early-stage projects with exploratory work
Context factors that say SOME:
- Limit WIP on critical workflows only
- Gradual introduction of WIP limits
15. Should we use feature flags for all new functionality?
Context factors that say YES:
- Continuous delivery or deployment culture
- Need for controlled feature rollout
- Desire to reduce deployment risk
Context factors that say NO:
- Simple or small projects
- Lack of infrastructure or tooling
Context factors that say SOME:
- Use feature flags for major or risky features only
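At its simplest, a feature flag is a guarded branch keyed off configuration, which lets code ship while the behavior stays dark. A minimal sketch with an in-memory flag store; real systems would read flags from a config file or a flag service, and the flag and function names here are hypothetical.

```python
# In-memory flag store; in practice this would come from config or a flag service.
FLAGS = {
    "new_checkout_flow": False,  # default off: the code ships, the behavior does not
}


def is_enabled(flag_name: str) -> bool:
    return FLAGS.get(flag_name, False)


def checkout(cart: list) -> str:
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)


def new_checkout(cart: list) -> str:
    return f"new flow for {len(cart)} items"


def legacy_checkout(cart: list) -> str:
    return f"legacy flow for {len(cart)} items"


print(checkout(["book", "pen"]))  # legacy flow until the flag is flipped
```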
16. Should we require code coverage metrics to meet a minimum threshold?
Context factors that say YES:
- Safety-critical or regulated software
- Desire for high test quality
- Teams practicing TDD or automated testing
Context factors that say NO:
- Prototyping or exploratory code
- Legacy codebases without tests
Context factors that say SOME:
- Minimum coverage for new or critical code only
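A minimum-coverage gate can be a single option in the coverage tool (for example pytest-cov's `--cov-fail-under`) or a small script in CI. Below is a minimal sketch that reads the overall line rate from a Cobertura-style coverage.xml, as produced by coverage.py's `coverage xml`; the 80% threshold and the report path are assumptions.

```python
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 80.0  # assumed project minimum, in percent


def main(report_path: str = "coverage.xml") -> int:
    root = ET.parse(report_path).getroot()
    # Cobertura-style reports carry overall line coverage as a 0..1 fraction.
    line_rate = float(root.attrib["line-rate"]) * 100
    if line_rate < THRESHOLD:
        print(f"Coverage {line_rate:.1f}% is below the {THRESHOLD:.0f}% minimum")
        return 1
    print(f"Coverage {line_rate:.1f}% meets the {THRESHOLD:.0f}% minimum")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```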
17. Should we enforce coding standards through automated linting?
Context factors that say YES:
- Large or distributed teams
- Desire for consistent code style
- Automated CI/CD pipelines
Context factors that say NO:
- Small teams with informal style agreements
- Early-stage prototypes
Context factors that say SOME:
- Linting for critical modules or new code
18. Should we have a separate QA team or embed testing in development teams?
Context factors that say SEPARATE QA TEAM:
- Large organizations with specialized roles
- Regulatory or compliance requirements
- Complex testing needs
Context factors that say EMBEDDED TESTING:
- Agile or DevOps teams
- Desire for faster feedback
- Cross-functional team culture
Context factors that say HYBRID:
- Combination of embedded testing and specialized QA for complex areas
19. Should we use trunk-based development or feature branches?
Context factors that say TRUNK-BASED:
- Continuous integration and delivery culture
- Small to medium teams
- Desire for fast feedback and integration
Context factors that say FEATURE BRANCHES:
- Large teams with complex features
- Need for isolated development and review
Context factors that say HYBRID:
- Use feature branches for major features, trunk-based for small fixes
20. Should we require architectural review for major changes?
Context factors that say YES:
- Complex or large-scale systems
- High-impact or cross-cutting changes
- Regulatory or compliance needs
Context factors that say NO:
- Small teams with informal architecture
- Minor or isolated changes
Context factors that say SOME:
- Architectural review for critical or high-risk changes only
21. Should we have a definition of done for all work items?
Context factors that say YES:
- Agile teams practicing Scrum or Kanban
- Desire for clear quality and completeness criteria
- Cross-functional teams
Context factors that say NO:
- Informal or ad-hoc workflows
- Very small teams
Context factors that say SOME:
- Definition of done for critical or complex work only
22. Should we use mob programming for complex problems?
Context factors that say YES:
- Complex or high-risk problems
- Team experienced with mob programming
- Desire for collective code ownership and knowledge sharing
Context factors that say NO:
- Simple or routine tasks
- Teams unfamiliar with mob programming
- Time pressure or tight deadlines
Context factors that say SOME:
- Use mob programming for critical modules or during onboarding
23. Should we conduct post-mortem analyses after incidents?
Context factors that say YES:
- Serious incidents impacting customers or business
- Desire for continuous improvement
- Regulatory or compliance requirements
Context factors that say NO:
- Minor or trivial incidents
- Lack of resources or time
Context factors that say SOME:
- Post-mortems for major incidents only
24. Should we rotate team members through on-call duties?
Context factors that say YES:
- Shared operational responsibility culture
- Desire to avoid burnout
- Cross-training benefits
Context factors that say NO:
- Small teams with dedicated on-call engineers
- Lack of team willingness or skills
Context factors that say SOME:
- Partial rotation or voluntary participation
25. Should we require documentation updates with every code change?
Context factors that say YES:
- Complex systems requiring up-to-date documentation
- Regulatory or compliance needs
- Desire for knowledge sharing
Context factors that say NO:
- Rapid prototyping or exploratory work
- Small or informal teams
Context factors that say SOME:
- Documentation updates for major or public-facing changes
26. Should we use semantic versioning for our releases?
Context factors that say YES:
- Public APIs or libraries
- Multiple consumers or integrations
- Desire for clear version communication
Context factors that say NO:
- Internal tools with limited distribution
- Small projects without version dependencies
Context factors that say SOME:
- Semantic versioning for major releases only
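Semantic versioning encodes compatibility in the version number itself: MAJOR for breaking changes, MINOR for backwards-compatible features, PATCH for fixes. A minimal sketch of parsing and a compatibility check; pre-release and build-metadata suffixes are ignored here for brevity.

```python
from typing import NamedTuple


class Version(NamedTuple):
    major: int
    minor: int
    patch: int

    @classmethod
    def parse(cls, text: str) -> "Version":
        major, minor, patch = (int(part) for part in text.split("."))
        return cls(major, minor, patch)


def is_compatible(installed: str, required: str) -> bool:
    """True if `installed` satisfies `required` under semver rules (same major, not older)."""
    i, r = Version.parse(installed), Version.parse(required)
    return i.major == r.major and i >= r


print(Version.parse("2.4.1") > Version.parse("2.4.0"))  # True: tuples compare field by field
print(is_compatible("2.5.0", "2.4.1"))                  # True: newer minor, same major
print(is_compatible("3.0.0", "2.4.1"))                  # False: major bump may break consumers
```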
27. Should we maintain a changelog?
Context factors that say YES:
- Public-facing software
- Multiple stakeholders or users
- Regulatory or compliance requirements
Context factors that say NO:
- Internal or single-user tools
- Rapidly evolving prototypes
Context factors that say SOME:
- Changelog for major releases or stable versions
28. Should we have dedicated sprint planning meetings?
Context factors that say YES:
- Agile teams practicing Scrum
- Need for clear sprint goals and commitments
- Stable team composition
Context factors that say NO:
- Kanban or continuous flow teams
- Very small or informal teams
Context factors that say SOME:
- Lightweight planning sessions or backlog grooming
29. Should we use Scrum, Kanban, or a hybrid approach?
Context factors that say SCRUM:
- Teams needing structured iterations and roles
- Desire for predictable delivery cadence
Context factors that say KANBAN:
- Continuous flow work
- Desire for flexibility and WIP limits
Context factors that say HYBRID:
- Teams with mixed work types
- Transitioning between methodologies
30. Should we track technical debt explicitly?
Context factors that say YES:
- Long-lived codebases
- Desire to manage and reduce debt
- Teams with capacity for refactoring
Context factors that say NO:
- Short-term projects or prototypes
- Teams with no bandwidth for debt management
Context factors that say SOME:
- Track debt for critical components only
31. Should we require security reviews for all features?
Context factors that say YES:
- Features with sensitive data or compliance requirements
- High-risk security exposure
- Regulatory or audit needs
Context factors that say NO:
- Low-risk internal tools
- Early-stage prototypes
Context factors that say SOME:
- Security reviews for major or public-facing features only
32. Should we conduct load testing?
Context factors that say YES:
- Systems with high traffic or performance requirements
- Critical infrastructure or services
- Desire to prevent production outages
Context factors that say NO:
- Small-scale or low-traffic applications
- Early-stage development
Context factors that say SOME:
- Load testing for major releases or critical components
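A dedicated tool (Locust, k6, JMeter) is the usual choice, but the core idea is just concurrent requests plus latency measurement. A minimal standard-library sketch; the URL, concurrency, and request count are placeholders, and this measures only rough latency, not realistic traffic shapes.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8000/health"  # placeholder endpoint
WORKERS = 20
REQUESTS = 200


def timed_request(_: int) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=5) as response:
        response.read()
    return time.perf_counter() - start


def main() -> None:
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = sorted(pool.map(timed_request, range(REQUESTS)))
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    print(f"median {statistics.median(latencies) * 1000:.1f} ms, p95 {p95 * 1000:.1f} ms")


if __name__ == "__main__":
    main()
```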
33. Should we use blue-green deployments?
Context factors that say YES:
- Need for zero-downtime deployments
- Complex or stateful applications
- Desire to reduce deployment risk
Context factors that say NO:
- Simple applications with low availability requirements
- Lack of deployment infrastructure
Context factors that say SOME:
- Blue-green for critical services only
34. Should we implement canary releases?
Context factors that say YES:
- Desire to test new features with limited user impact
- Mature CI/CD pipelines
- Ability to monitor and rollback quickly
Context factors that say NO:
- Small teams or projects without monitoring
- Simple deployment processes
Context factors that say SOME:
- Canary releases for major features or high-risk changes
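The core of a canary release is deterministic, percentage-based routing so the same user consistently hits the same version. A minimal sketch using a stable hash of the user id; the 5% slice and version names are assumptions, and real rollouts usually live in a load balancer or feature-flag service rather than application code.

```python
import hashlib

CANARY_PERCENT = 5  # assumed initial slice of traffic


def bucket(user_id: str) -> int:
    """Map a user id to a stable bucket in [0, 100)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100


def route(user_id: str) -> str:
    return "canary" if bucket(user_id) < CANARY_PERCENT else "stable"


# The same user is always routed the same way, so ramping up just means raising the percentage.
for uid in ("user-1", "user-2", "user-3"):
    print(uid, route(uid))
```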
35. Should we require integration tests for all API endpoints?
Context factors that say YES:
- APIs with multiple consumers
- Critical business functionality
- Desire for high reliability
Context factors that say NO:
- Internal or low-impact APIs
- Early-stage or prototype APIs
Context factors that say SOME:
- Integration tests for critical or public APIs only
36. Should we use contract testing between services?
Context factors that say YES:
- Microservices architecture
- Multiple teams owning different services
- Desire to prevent integration issues
Context factors that say NO:
- Monolithic applications
- Small teams with tightly coupled services
Context factors that say SOME:
- Contract testing for critical service interactions
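Tools such as Pact formalize this, but the underlying idea is small: the consumer publishes the response shape it depends on, and the provider's test suite verifies it still produces that shape. A minimal, tool-free sketch; the contract fields and the provider handler are hypothetical.

```python
# Contract the consumer team publishes: the fields (and types) it actually reads.
ORDER_CONTRACT = {
    "id": str,
    "status": str,
    "total_cents": int,
}


def provider_get_order(order_id: str) -> dict:
    # Stand-in for the provider's real handler.
    return {"id": order_id, "status": "shipped", "total_cents": 4200, "carrier": "ACME"}


def satisfies(contract: dict, response: dict) -> bool:
    """The provider may add fields, but every contracted field must exist with the right type."""
    return all(
        key in response and isinstance(response[key], expected_type)
        for key, expected_type in contract.items()
    )


def test_order_endpoint_honours_consumer_contract():
    assert satisfies(ORDER_CONTRACT, provider_get_order("o-123"))
```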
37. Should we maintain separate environments for dev, staging, and production?
Context factors that say YES:
- Need for safe testing and validation
- Regulatory or compliance requirements
- Multiple deployment stages
Context factors that say NO:
- Small projects or solo developers
- Simple deployment pipelines
Context factors that say SOME:
- Separate environments for critical or complex systems
38. Should we gate deployments on successful automated test runs?
Context factors that say YES:
- Desire to prevent regressions
- Mature automated test suites
- Continuous delivery culture
Context factors that say NO:
- Lack of automated tests
- Manual deployment processes
Context factors that say SOME:
- Gate deployments for critical releases only
39. Should we require rollback plans for all deployments?
Context factors that say YES:
- High-risk or critical systems
- Complex deployment processes
- Regulatory or compliance needs
Context factors that say NO:
- Simple or low-risk deployments
- Early-stage projects
Context factors that say SOME:
- Rollback plans for major or production releases
40. Should we use infrastructure as code?
Context factors that say YES:
- Multiple environments that must stay consistent
- Frequent or repeatable infrastructure changes
- Need for auditability and disaster recovery
Context factors that say NO:
- Very small or static infrastructure
- Short-lived prototypes or experiments
Context factors that say SOME:
- Infrastructure as code for production environments only
41. Should we conduct sprint demos at the end of each iteration?
Context factors that say YES:
- Need to showcase progress and gather feedback
- Stakeholders expect regular updates
- Team benefits from reflection and celebration
Context factors that say NO:
- Very short or continuous flow iterations
- Low stakeholder interest or availability
- Overhead outweighs benefits for small teams
Context factors that say SOME:
- Conduct demos for major milestones only
- Use lightweight demos or asynchronous updates
42. Should we invite stakeholders to sprint demos?
Context factors that say YES:
- Stakeholders actively involved in product direction
- Desire for transparency and alignment
- Need for early feedback to reduce rework
Context factors that say NO:
- Stakeholders unavailable or uninterested
- Sensitive or internal-only content
- Risk of scope creep from stakeholder input
Context factors that say SOME:
- Invite key stakeholders selectively
- Use separate sessions for detailed feedback
43. Should we use acceptance criteria for all user stories?
Context factors that say YES:
- Need clear definition of done
- Complex or critical features
- Desire to reduce ambiguity and rework
Context factors that say NO:
- Very small or trivial tasks
- Exploratory or research stories
Context factors that say SOME:
- Use for major or customer-facing stories
- Lightweight criteria for simple stories
44. Should we require product owner sign-off before marking work complete?
Context factors that say YES:
- Formal product ownership and accountability
- Need to ensure business requirements met
- Compliance or contractual obligations
Context factors that say NO:
- Small teams with shared responsibility
- Rapid prototyping or continuous delivery
Context factors that say SOME:
- Sign-off for major features or releases
- Use automated acceptance tests as proxy
45. Should we conduct usability testing with actual users?
Context factors that say YES:
- User-facing products with significant UI
- Need to validate user experience and workflows
- Desire to reduce support costs and improve satisfaction
Context factors that say NO:
- Backend or internal tools with limited UI
- Early exploratory prototypes
Context factors that say SOME:
- Conduct limited testing on key workflows
- Use heuristic or expert reviews as supplement
46. Should we track and measure DORA metrics?
Context factors that say YES:
- Desire to improve software delivery performance
- Teams practicing DevOps and continuous delivery
- Need to benchmark and identify bottlenecks
Context factors that say NO:
- Small teams with informal processes
- Lack of automation or telemetry data
Context factors that say SOME:
- Track select metrics relevant to team goals
- Use metrics for improvement, not punishment
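Two of the four DORA metrics fall straight out of deployment records: deployment frequency and lead time for changes (commit to deploy). A minimal sketch over hypothetical records; change failure rate and time to restore would additionally need incident data.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment records: when the change was committed and when it reached production.
DEPLOYS = [
    {"committed": datetime(2024, 5, 1, 9, 0), "deployed": datetime(2024, 5, 1, 15, 0)},
    {"committed": datetime(2024, 5, 2, 11, 0), "deployed": datetime(2024, 5, 3, 10, 0)},
    {"committed": datetime(2024, 5, 6, 14, 0), "deployed": datetime(2024, 5, 6, 18, 30)},
]


def deployment_frequency(deploys: list[dict], period_days: int = 7) -> float:
    return len(deploys) / period_days


def median_lead_time(deploys: list[dict]) -> timedelta:
    return median(d["deployed"] - d["committed"] for d in deploys)


print(f"{deployment_frequency(DEPLOYS):.2f} deploys/day")
print(f"median lead time: {median_lead_time(DEPLOYS)}")
```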
47. Should we have Service Level Objectives (SLOs) for our systems?
Context factors that say YES:
- Customer-facing or critical systems
- Need to define reliability and performance targets
- Support for incident management and alerting
Context factors that say NO:
- Internal or non-critical systems
- Early-stage or experimental projects
Context factors that say SOME:
- Define SLOs for key services only
- Use informal targets for less critical systems
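An SLO becomes actionable through its error budget: the slice of the measurement window in which the target allows the service to fail. A minimal sketch of the arithmetic for an availability SLO; the 99.9% target and the observed downtime are example numbers.

```python
from datetime import timedelta

SLO_TARGET = 0.999           # assumed availability objective
WINDOW = timedelta(days=30)  # rolling window the objective is measured over


def error_budget(target: float, window: timedelta) -> timedelta:
    """Total downtime the SLO permits over the window."""
    return window * (1 - target)


def budget_remaining(target: float, window: timedelta, downtime: timedelta) -> float:
    """Fraction of the budget still unspent (negative means the SLO is blown)."""
    budget = error_budget(target, window)
    return (budget - downtime) / budget


budget = error_budget(SLO_TARGET, WINDOW)
print(f"budget: {budget}")  # roughly 43 minutes for 99.9% over 30 days
print(f"remaining: {budget_remaining(SLO_TARGET, WINDOW, timedelta(minutes=12)):.0%}")
```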
48. Should we implement distributed tracing?
Context factors that say YES:
- Complex, distributed, or microservices architectures
- Need to diagnose production issues quickly
- Desire to improve system reliability and performance
Context factors that say NO:
- Simple or monolithic applications
- Lack of resources or expertise
Context factors that say SOME:
- Use selectively for critical components
- Combine with traditional monitoring
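Adopting tracing usually means OpenTelemetry or a vendor agent, but the core mechanism is small: generate a trace id at the edge and propagate it with every downstream call and log line. A minimal, library-free sketch using contextvars; the header name mirrors common practice, and everything else here is illustrative.

```python
import contextvars
import uuid

# Current trace id for this request, visible to everything on the same call path.
current_trace_id = contextvars.ContextVar("trace_id", default=None)


def handle_request(incoming_headers: dict) -> None:
    # Reuse the caller's trace id if one was propagated, otherwise start a new trace.
    trace_id = incoming_headers.get("x-trace-id") or uuid.uuid4().hex
    current_trace_id.set(trace_id)
    log("request received")
    call_downstream_service()


def call_downstream_service() -> None:
    headers = {"x-trace-id": current_trace_id.get()}  # propagate to the next hop
    log(f"calling inventory service with headers {headers}")


def log(message: str) -> None:
    print(f"trace={current_trace_id.get()} {message}")


handle_request({})
```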
49. Should we conduct chaos engineering experiments?
Context factors that say YES:
- High-availability, critical systems
- Desire to proactively identify weaknesses
- Mature DevOps and testing culture
Context factors that say NO:
- Non-critical or low-traffic systems
- Immature monitoring or incident response practices
Context factors that say SOME:
- Run experiments in staging or against non-critical services first
- Scheduled, limited-blast-radius game days for critical services
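Purpose-built tools (Chaos Monkey, Gremlin, LitmusChaos) inject faults at the infrastructure level, but the idea can be illustrated in a few lines: add controlled latency or failures and observe whether the system degrades gracefully. A minimal sketch of a fault-injecting wrapper; the failure rates and the wrapped function are hypothetical, and real experiments should run with explicit blast-radius limits and monitoring.

```python
import functools
import random
import time


def inject_faults(latency_s: float = 0.5, failure_rate: float = 0.1, enabled: bool = True):
    """Wrap a call with occasional extra latency and occasional simulated failures."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if enabled:
                if random.random() < failure_rate:
                    raise RuntimeError("chaos: simulated dependency failure")
                time.sleep(random.uniform(0, latency_s))
            return func(*args, **kwargs)
        return wrapper
    return decorator


@inject_faults(latency_s=0.2, failure_rate=0.2)
def fetch_recommendations(user_id: str) -> list[str]:
    return ["item-1", "item-2"]  # stand-in for a real dependency call


for attempt in range(5):
    try:
        print(fetch_recommendations("user-42"))
    except RuntimeError as error:
        print(f"degraded gracefully: {error}")
```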
50. Should we have a formal incident response process?
Context factors that say YES:
- Systems with uptime or reliability commitments
- Need to coordinate cross-team response
- Compliance or regulatory requirements
Context factors that say NO:
- Small teams with informal communication
- Low-risk or non-critical systems
Context factors that say SOME:
- Lightweight process for small teams
- Formalize as systems grow in complexity