Advanced Programming and Client-Based Design
At Year 4 Advanced, Design requires not only creating functional products but also evaluating them rigorously against client needs, applying UX research methods, and designing detailed test protocols. Programming moves beyond basic scripts to object-oriented paradigms and APIs.
What You'll Learn
- Apply the complete MYP Design Cycle: Inquire, Plan, Create, Evaluate
- Write detailed, measurable design specifications linked to client needs
- Understand object-oriented programming (OOP) concepts: classes, objects, methods
- Apply UX research methods to investigate user needs before designing
- Create wireframes and interactive prototypes for digital products
- Design and execute detailed test protocols with objective success criteria
- Evaluate products critically, identifying specific improvements with justification
IB Assessment Focus
Criterion A (Inquire): Identify and explain the design problem; research context, client needs, and existing solutions; write detailed design specification with measurable criteria.
Criterion B (Plan): Develop detailed design plans and justify design choices; produce annotated sketches or wireframes; plan testing protocol.
Criterion C (Create): Demonstrate technical skill; follow and adapt the plan; document the creation process.
Criterion D (Evaluate): Test rigorously against design specification; evaluate honestly (strengths AND limitations); identify specific, justified improvements.
Key Vocabulary
| Term | Definition |
|---|---|
| Design cycle | Inquire → Plan → Create → Evaluate. An iterative process for developing solutions. |
| Design specification | A detailed list of measurable criteria the product must meet, derived from client/user needs |
| OOP (Object-Oriented Programming) | A paradigm organising code into objects with properties (attributes) and behaviours (methods) |
| API | Application Programming Interface — allows programs to communicate with external services or datasets |
| User testing | Testing a product with representative users to identify real usability problems |
| Iteration | Repeating the design-build-test cycle to progressively improve a product |
| Prototype | A working model of a design used for testing before full development; may be low-fidelity (wireframe) or high-fidelity (interactive) |
| UX research | Investigating user needs, behaviours, and pain points to inform design decisions |
The MYP Design Cycle
The MYP Design Cycle is an iterative, not linear, process. Each phase informs the others; evaluation leads back to new inquiry and improvement. At Year 4, you must apply each phase with depth and genuine critical thinking.
Four Phases of the Design Cycle
| Phase | Key activities | Year 4 Advanced requirement |
|---|---|---|
| Inquire and Analyse | Identify the problem; research context, client, users, existing solutions; write design specification | Detailed needs analysis; research of multiple existing solutions with evaluation; design specification with measurable criteria for EACH point |
| Develop Ideas | Generate multiple design ideas; annotate and compare; select and justify best solution | At least 3 genuinely different design ideas; annotated evaluation of each against the specification; justified selection with acknowledgment of trade-offs |
| Create the Solution | Construct the product following the plan; document deviations and reasons; show skills development | Detailed creation log; justification for deviations from plan; evidence of technical skill; iterative improvements during creation |
| Evaluate | Test against design specification; evaluate with client/users; identify improvements | Objective testing for EVERY specification criterion; evaluation by actual users; honest identification of limitations; specific, justified improvements |
After evaluation, designers always identify improvements. These improvements require new inquiry (understanding why something failed), new design ideas (how to fix it), and a new creation iteration. The most sophisticated Year 4 responses show evidence of multiple iteration cycles — not a single linear pass from inquiry to evaluation.
Programming Concepts — OOP and APIs
At Year 4, programming moves beyond procedural scripts to object-oriented paradigms and integration with external data sources via APIs. Understanding these concepts allows you to build more complex, maintainable, and powerful applications.
Object-Oriented Programming (OOP)
- Class: A blueprint or template defining the attributes (data) and methods (behaviours) that objects of that type will have. E.g., a `Student` class.
- Object: A specific instance of a class. E.g., `student1` is an object of the `Student` class with specific attribute values (name="Alice", grade=9).
- Attribute: A variable that stores data about the object. E.g., `student.name`, `student.age`.
- Method: A function defined inside a class that describes what objects can do. E.g., `student.submit_assignment()`.
- Inheritance: A subclass inherits attributes and methods from a parent class, allowing code reuse and extension.
OOP organises code so that real-world entities (students, products, users) are represented as objects. This makes code: (1) more modular — each object manages its own data; (2) more maintainable — changes to a class affect all objects; (3) easier to scale — adding new entity types requires creating new classes, not rewriting everything. At Year 4, you should be able to design a class structure for a given client problem and justify why OOP is appropriate.
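The class/object/attribute/method distinction above can be sketched in a few lines of Python. The `Student` class and `submit_assignment()` method come from the definitions in this section; the specific attribute names and return string are illustrative assumptions.

```python
# Minimal sketch of the Student class described above.
class Student:
    """Class: a blueprint defining attributes and methods."""

    def __init__(self, name, age):
        self.name = name            # attribute: data stored on the object
        self.age = age              # attribute
        self.assignments = []       # attribute

    def submit_assignment(self, title):
        """Method: a behaviour this object can perform."""
        self.assignments.append(title)
        return f"{self.name} submitted '{title}'"


# student1 is an object: a specific instance of the Student class.
student1 = Student("Alice", 14)
print(student1.submit_assignment("Design portfolio"))
```

Running this creates one object from the class blueprint; creating `student2 = Student("Ben", 15)` would give a second, independent object with its own attribute values.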
APIs (Application Programming Interfaces)
- An API allows your program to request data or services from an external source (another application, database, or web service) without knowing its internal implementation.
- Example: A weather app uses the OpenWeatherMap API to request temperature data — your code sends a request, the API returns the data in a standardised format (typically JSON).
- At Year 4, you should be able to: explain what an API is, describe how it is used in a product, and evaluate its advantages and risks.
| API advantage | API limitation/risk |
|---|---|
| Access to external data without building from scratch | Dependency on third party — if the API changes or is discontinued, your product breaks |
| Standardised, tested data formats (JSON, REST) | Rate limits: APIs often restrict number of requests per minute/day |
| Enables features that would require enormous resources to build independently (maps, payments, authentication) | Privacy and data security: user data may be shared with third parties |
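The request/response pattern described above can be sketched without a live network call by parsing a sample JSON payload of the kind an API typically returns. The field names (`city`, `temp_c`, `humidity`) are invented for illustration, not taken from any real API.

```python
import json

# Hand-written sample of the standardised JSON an API might return;
# a real program would receive this as the body of an HTTP response.
raw_response = '{"city": "Geneva", "temp_c": 18.5, "humidity": 62}'

# The standardised format means any program can parse it the same way,
# without knowing how the service produced the data internally.
data = json.loads(raw_response)
print(f"{data['city']}: {data['temp_c']} °C, {data['humidity']}% humidity")
```

The key point for Year 4 is the separation of concerns: your code depends only on the documented response format, not on the provider's internal implementation — which is both the advantage (reuse) and the risk (format changes break your product).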
UX Research and Prototyping
UX (User Experience) research discovers what users actually need — not what designers assume they need. At Year 4, demonstrating rigorous UX research before designing is essential for Criterion A.
UX Research Methods
| Method | What it discovers | Strength | Limitation |
|---|---|---|---|
| User interviews | Attitudes, needs, and pain points in depth | Rich qualitative data; uncovers unexpected needs | Small sample; time-consuming; interviewer bias possible |
| Surveys/questionnaires | Preferences and behaviours at scale | Large sample; quantifiable data | Surface-level; users may give aspirational rather than true responses |
| Observation / contextual inquiry | How users actually behave (not what they say) | Reveals real usage patterns; uncovers workarounds | Time-consuming; observer effect |
| Analysis of existing solutions | What works and what doesn't in current products | Identifies gaps and best practices | May bias design toward incremental improvement rather than innovation |
Wireframes vs Prototypes
| Type | Description | Purpose | When used |
|---|---|---|---|
| Wireframe (low-fidelity) | Static sketch showing layout, navigation, and content hierarchy; no visual styling | Test structure and flow without investing in visual design | Early in the design process; before committing to visual direction |
| Prototype (high-fidelity) | Interactive mock-up demonstrating how the product will look and function | Realistic user testing before development | After wireframes are validated; before full programming begins |
Finding usability problems in a prototype costs almost nothing to fix (move elements in a design tool). Finding the same problem after full programming costs hours of rework. At Year 4, creating and testing a prototype before writing production code is a mark of sophisticated design thinking.
Testing and Evaluation
Evaluation is the most discriminating criterion at Year 4. Generic claims ("my product works well") earn no marks. You must design specific tests, execute them with real users or objective measures, report actual results, and use those results to justify specific improvements.
Types of Testing
| Test type | What it tests | Example |
|---|---|---|
| Usability testing | Can users complete key tasks without difficulty? | 5 users attempt to find a book in the library catalogue; measure success rate and time |
| Performance testing | Does the product meet speed/efficiency criteria? | Measure page load time; check against specification criterion (<3 seconds) |
| Functionality testing | Does each feature work as intended? | Test all buttons, forms, links, and error states systematically |
| Accessibility testing | Can users with different abilities use the product? | Check colour contrast ratio (WCAG AA: minimum 4.5:1); keyboard navigation; screen reader compatibility |
| Client/user feedback | Does the product meet client requirements and user expectations? | Client reviews the product against the original design brief; users rate satisfaction |
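The 4.5:1 contrast check in the accessibility row can be computed directly from the WCAG 2.x formulas: relative luminance of each sRGB colour, then the ratio (L1 + 0.05) / (L2 + 0.05). This is a sketch of the published formula, not a replacement for a full checker.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an sRGB colour (channels 0-255)."""
    def linearise(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L1 + 0.05) / (L2 + 0.05), with L1 the lighter colour."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background gives the maximum ratio, 21:1;
# WCAG AA requires at least 4.5:1 for normal text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))
```

In practice a tool such as the WebAIM checker applies exactly this calculation; implementing it once helps explain *why* a pale-grey-on-white palette fails the criterion.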
Structure of a Year 4 Evaluation
- State the test: What was tested, how, who tested it, what the success criterion was.
- State the result: Actual, objective outcome — numbers, observations, user quotes.
- Evaluate: Did it meet the criterion? Why or why not?
- Suggest improvement: Specific change, justified by the test result, that would address the limitation.
Test: "5 students aged 12–14 attempted to find the library's opening hours on the website without assistance. Success criterion: 4/5 find it in under 30 seconds."
Result: 2/5 found it in under 30 seconds; 2 more found it but took 45–60 seconds; 1 failed to find it.
Evaluation: The product failed to meet the criterion. The opening hours are buried in the 'About' section with no link from the homepage. This violates the design specification criterion for information findability.
Improvement: Add an 'Opening Hours' widget to the homepage or create a persistent footer link. This is directly justified by the test evidence.
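The objective check in this worked example — results against a success criterion — can be recorded in a few lines. The timing values below restate the results already given in the example (2/5 under 30 s, two more at 45–60 s, one failure); the exact numbers within those ranges are illustrative.

```python
# Usability-test results: seconds to find the opening hours;
# None means the user failed to find them at all.
times_seconds = [22, 28, 47, 58, None]

# Success criterion from the test protocol: 4 of 5 users in under 30 s.
under_30 = sum(1 for t in times_seconds if t is not None and t <= 30)
criterion_met = under_30 >= 4

print(f"{under_30}/5 users under 30 s -> criterion met: {criterion_met}")
```

Recording results this way keeps the evaluation objective: the claim "2/5, criterion not met" is reproducible from the raw data, which is exactly what Criterion D rewards.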
Worked Examples
Model responses at Year 4 Advanced Design standard.
Test 2 — Load time: Measure page load time on three separate devices (school desktop, student laptop, smartphone) using browser developer tools. Success criterion: <3 seconds on standard school Wi-Fi (25 Mbps). Rationale: slow load times reduce engagement, particularly on mobile.
Test 3 — Accessibility: Check colour contrast ratio of all text/background combinations using WebAIM contrast checker. Success criterion: minimum 4.5:1 ratio for normal text (WCAG AA standard). Also test keyboard navigation (tab order). Rationale: accessibility is a design specification criterion and legal requirement.
Test 4 — Mobile responsiveness: View website at 320px, 768px, and 1200px widths. Success criterion: all content readable and buttons functional at each breakpoint. Rationale: 40% of school library searches occur on mobile devices (from UX research).
Test 5 — Functionality: Systematically test all links, search functionality, form submissions, and error states. Success criterion: 0 broken links; search returns relevant results for 10 test queries; forms provide appropriate validation feedback. Rationale: functional failures undermine the product's core purpose.
Each test maps to a specific design specification criterion, ensuring the evaluation is objective and evidence-based.
For a school management system, OOP is highly appropriate because the system naturally involves multiple distinct entity types:
• A `Student` class with attributes (name, grade, ID) and methods (enrol_course(), submit_assignment(), view_grades()).
• A `Teacher` class with different attributes and methods (create_assignment(), mark_submission(), view_class()).
• A `Course` class with enrolled students, assigned teachers, and associated assignments.

Advantages for this system:
1. Modularity: Changes to the Student class (e.g., adding a new attribute) don't require rewriting teacher or course code.
2. Reusability: Common behaviours (e.g., login methods) can be inherited from a parent `User` class by both Student and Teacher subclasses, avoiding code duplication.
3. Scalability: Adding a `Librarian` class requires creating a new subclass inheriting User behaviours, not rewriting the whole system.
4. Maintainability: Each class is self-contained; bugs in student functionality don't affect teacher functionality.
OOP is the industry standard for large, complex systems precisely because these properties make the codebase manageable over time.
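The inheritance structure argued for above — a shared `User` parent with `Student` and `Teacher` subclasses — can be sketched as follows. Method names match those in the answer; the login behaviour itself is an illustrative assumption.

```python
class User:
    """Parent class: behaviour shared by every kind of system user."""

    def __init__(self, name, user_id):
        self.name = name
        self.user_id = user_id
        self.logged_in = False

    def login(self):
        # Written once here, inherited by every subclass below.
        self.logged_in = True
        return f"{self.name} logged in"


class Student(User):
    def submit_assignment(self, title):
        return f"{self.name} submitted '{title}'"


class Teacher(User):
    def create_assignment(self, title):
        return f"{self.name} created '{title}'"


# Both subclasses reuse login() without duplicating it — the
# "Reusability" advantage. Adding a Librarian(User) subclass later
# would need no changes to existing code — the "Scalability" advantage.
alice = Student("Alice", "S001")
mr_b = Teacher("Mr B", "T001")
print(alice.login())
print(mr_b.create_assignment("Essay draft"))
```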
Problems:
1. No reference to the design specification — evaluation must test the product against specific, measurable criteria established before creation. "Worked well" has no meaning without criteria.
2. No evidence — "users liked it" is a subjective claim unsupported by any data (user test results, ratings, specific feedback).
3. No identification of limitations — every product has limitations; failing to identify them suggests either superficial evaluation or unwillingness to be honest about shortcomings.
4. No improvements suggested — Year 4 requires specific, justified improvements based on test results.
5. No test protocol described — who tested, what they tested, success criterion, actual result.
What a Year 4 evaluation requires: "Test 1: 5 users attempted [task]. Success criterion: [X]. Result: [actual data]. The product [met/failed to meet] this criterion because [specific reason]. To improve, I would [specific change] because [justification linked to test result]."
Practice Q&A
Attempt each question before revealing the model answer.
Prototype: An interactive, higher-fidelity mock-up that demonstrates how the product looks and functions. Can be clicked through, simulating real user experience. Purpose: test usability and user flow realistically before writing production code.
Why use both: Wireframes are faster and cheaper to create and revise — ideal for early structural decisions. Prototypes are necessary for valid usability testing (users need a realistic experience to give meaningful feedback). Moving from wireframe to prototype ensures structural problems are solved before visual and interactive investment is made. Finding a navigation problem at the wireframe stage takes minutes to fix; at the prototype stage takes hours; after full development takes days.
This is valid because it is: specific (booking a school event), measurable (3 steps or fewer; 90 seconds), testable (5 users; specific protocol), and linked to a user need (efficiency of booking process).
Invalid criterion example: "The app should be easy to use." This fails because "easy" is not measurable and provides no basis for objective testing.
Student project example: A student creating a local weather app for a school garden project could use the OpenWeatherMap API. Their app sends an HTTP request to the API with the school's location; the API returns current temperature, humidity, and precipitation forecast in JSON format; the app displays this data to gardeners to help them decide when to water.
Benefit: The student doesn't need to build a weather data collection system — they access a professionally maintained global weather database for free.
Risk to acknowledge: Dependency on a third-party service; the free tier has rate limits (limited requests per day); if the API changes its format, the app must be updated.
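The weather-app flow above can be sketched without an API key or network access by parsing a hand-written sample response. The nested shape (a "main" block with temperature and humidity, a "weather" list of conditions) is modelled loosely on OpenWeatherMap's current-weather JSON, but the exact field names here are sample data and should be checked against the API's documentation before use.

```python
import json

# Hand-written sample standing in for a live API response.
sample_response = """
{
  "name": "School Garden",
  "main": {"temp": 18.5, "humidity": 62},
  "weather": [{"description": "light rain"}]
}
"""

data = json.loads(sample_response)

# Simple (illustrative) decision rule for the gardeners: water only
# when humidity is low and no rain is forecast.
needs_watering = (data["main"]["humidity"] < 50
                  and "rain" not in data["weather"][0]["description"])
print("Water the garden today" if needs_watering else "No watering needed")
```

A real version would fetch this JSON over HTTP with the school's coordinates and an API key — which is where the rate-limit and format-change risks noted above come in.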
1. Early (wireframe stage): Users can identify whether the navigation structure makes sense and whether they can find key features. Fixing structural problems at this stage costs almost nothing.
2. Mid-development (prototype stage): Users can test realistic interactions, revealing usability problems before programming investment is made.
3. After launch (Criterion D evaluation): Testing the finished product with users generates objective evidence about whether the design specification criteria were met.
The key insight is that user testing should not be left until the end. Designers often assume they know what users need; user testing reveals the gap between designer assumptions and actual user behaviour. Testing at every stage catches problems earlier when they are cheaper to fix.
Limitation of user testing: Small samples may not represent the full user population; users' stated preferences may not reflect actual usage; testing in artificial conditions may not capture real-world constraints.
Class: Book
Attributes: title, author, ISBN, genre, copies_available, copies_total
Methods: check_out(student_id), return_book(), is_available(), get_details()
Class: Student
Attributes: student_id, name, grade, borrowed_books (list of Book objects), max_loans
Methods: borrow_book(book), return_book(book), view_loans(), search_catalogue(keyword)
Class: Librarian (inherits from Staff)
Attributes: staff_id, name
Methods: add_book(book_details), remove_book(ISBN), process_return(student, book), generate_report()
Class: Loan
Attributes: book, student, date_borrowed, due_date, date_returned
Methods: calculate_fine(), mark_returned()
Justification: OOP is appropriate here because: each entity type has distinct data and behaviours; relationships between entities (Student borrows Book; Loan links both) are clearly modelled; adding new entity types (e.g., eBook, Teacher) requires only new classes, not system rewrite.
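The `Book`/`Loan` relationship from the class design above can be sketched to show how a Loan links a book to a borrower and computes fines. The 14-day loan period and 0.50-per-day fine rate are illustrative assumptions, not part of the brief.

```python
from datetime import date, timedelta

class Book:
    def __init__(self, title, isbn, copies_total):
        self.title = title
        self.isbn = isbn
        self.copies_total = copies_total
        self.copies_available = copies_total

    def is_available(self):
        return self.copies_available > 0


class Loan:
    """Links a Book to a borrower, as in the design justification."""
    FINE_PER_DAY = 0.50   # assumed rate for illustration

    def __init__(self, book, student_id, date_borrowed, loan_days=14):
        self.book = book
        self.student_id = student_id
        self.date_borrowed = date_borrowed
        self.due_date = date_borrowed + timedelta(days=loan_days)
        self.date_returned = None
        book.copies_available -= 1     # borrowing takes a copy

    def mark_returned(self, on_date):
        self.date_returned = on_date
        self.book.copies_available += 1

    def calculate_fine(self):
        if self.date_returned is None or self.date_returned <= self.due_date:
            return 0.0
        days_late = (self.date_returned - self.due_date).days
        return days_late * self.FINE_PER_DAY


book = Book("The Hobbit", "978-0", copies_total=2)
loan = Loan(book, "S001", date(2024, 3, 1))      # due 15 March
loan.mark_returned(date(2024, 3, 20))            # 5 days late
print(loan.calculate_fine())
```

Note how the Loan object, not the Book or the Student, owns the dates and the fine logic — modelling the relationship as its own class is exactly the design decision the justification argues for.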
Ethical concerns:
1. Privacy: Students (especially minors) have a right to privacy. Data collected about their reading habits, attendance, or behaviour should be used only for the stated purpose and not shared without consent.
2. Consent and transparency: Students and parents must be informed about what data is collected, how it is stored, and who has access. For minors, parental consent is legally required in many jurisdictions (GDPR, COPPA).
3. Data security: Student data stored on school apps is a high-value target. Breaches could expose personal information. Designers must implement appropriate security (encryption, minimal data collection).
4. Algorithmic bias: If the app uses data to make recommendations or decisions (e.g., which students to support), biases in the data can produce unfair outcomes.
5. Data minimisation: Collect only the data genuinely needed for the product's purpose. Collecting additional data "in case it's useful later" is ethically problematic and legally questionable.
Conclusion: Data collection in school apps is ethically justifiable when it is transparent, consensual, minimal, secure, and purposeful. The designer bears responsibility for ethical data governance, not just technical implementation.
• Inquiry reveals needs you didn't know about, requiring new design ideas.
• Design ideas reveal constraints (technical, budget, time) that require modified specifications.
• Creation reveals unexpected technical challenges that require design adaptation.
• Evaluation always reveals usability issues, missed requirements, or improved possibilities that require new inquiry and design.
A linear process assumes you can know everything upfront — an unrealistic assumption for any complex product. Iteration manages uncertainty by building, testing, and learning in small cycles rather than committing to a complete plan that may be wrong. This is why the most successful products (from software to consumer goods) go through multiple design iterations before reaching a final form.
At Year 4, a design portfolio that shows only a linear progression from first idea to finished product is a red flag — it suggests either no genuine iteration occurred or the student hasn't reflected on the inevitable changes that any real design process involves.
Flashcard Review
Tap each card to reveal the answer. Try to answer from memory first.
1. Inquire and Analyse (identify the problem, research, write the design specification)
2. Develop Ideas (generate and evaluate multiple designs)
3. Create the Solution (build following the plan)
4. Evaluate (test against specification, identify improvements)