Welcome to your AIGP

1. Which of the following best describes the primary purpose of AI governance?
A. To accelerate the technical performance of AI models
B. To ensure AI systems are used responsibly, lawfully, and with accountability
C. To reduce the financial costs of AI infrastructure
D. To improve the energy efficiency of AI training

2. AI governance frameworks often categorize risks. Which of the following is a correct example of an impact-based risk category?
A. GPU overheating
B. Privacy violations from data misuse
C. Higher electricity bills
D. Increased storage requirements

3. Case Study: A bank introduces an AI system for loan approvals. The system disproportionately rejects applicants from a particular ethnic group despite identical financial profiles. Which governance concern is most directly implicated?
A. Energy consumption efficiency
B. Fairness and discrimination risk
C. Reproducibility of algorithms
D. Open-source licensing

4. In AI taxonomy, what differentiates general-purpose AI (GPAI) from narrow AI?
A. GPAI is trained with unsupervised methods only
B. GPAI can perform tasks across multiple domains beyond its original training
C. Narrow AI is always unsafe by design
D. Narrow AI has no governance requirements

5. Which of the following is an example of a Responsible AI principle in practice?
A. Training larger models to maximize accuracy
B. Deploying an AI system with no human oversight
C. Implementing explainability tools to support accountability
D. Minimizing governance documentation to save time

6. When comparing AI vs. traditional software lifecycles, what is a key distinction?
A. AI systems require no testing phase
B. AI systems require continuous retraining as data and conditions change
C. AI has no architecture documentation needs
D. Traditional software always requires human-in-the-loop decisions
Explanation: Continuous retraining differentiates AI from traditional software.

7. Which of the following best illustrates a value creation vs. risk trade-off in AI governance?
A. Using AI to detect fraud but risking false positives that harm customers
B. Training models on larger GPUs for efficiency gains
C. Allowing engineers to bypass governance checklists
D. Reducing documentation to speed up releases

8. Which governance artifact defines who has authority to approve, escalate, and own AI risks?
A. DPIA template
B. RACI chart
C. GPU provisioning plan
Explanation: A RACI chart clarifies roles and responsibilities.

9. Case Study: An AI system at a hospital suggests treatments. Governance requires a workflow where a committee reviews high-risk outputs before release. This is an example of:
A. Pre-market conformity assessment
B. Escalation and approvals
C. Technical performance benchmarking
D. Vendor procurement control

10. Why is policy-to-procedure workflow mapping critical in AI governance?
A. It ensures policies are translated into enforceable operational controls
B. It reduces GPU energy consumption
C. It removes the need for audits
D. It eliminates the need for model cards

11. Which of the following is a third-party risk in AI governance?
A. Vendor supplying biased pre-trained models
B. Low GPU efficiency
C. Internal staff turnover
D. Missing user interface guidelines

12. Which is a governance benefit of maintaining lineage and provenance records for datasets?
A. Improves compute efficiency
B. Supports traceability, accountability, and auditability
C. Prevents model drift
D. Guarantees model fairness
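Question 12 asks about dataset lineage and provenance records. As one illustrative way to make that concrete (a minimal sketch; the record fields and names are hypothetical, not a prescribed schema), a provenance entry can capture a content hash, source, and transformation history so auditors can trace exactly which dataset version a model was trained on:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Minimal lineage entry for one dataset version (illustrative only)."""
    dataset_name: str
    source: str                      # where the data came from
    sha256: str                      # content hash for traceability
    created_at: str
    transformations: list = field(default_factory=list)

def record_provenance(name: str, source: str, raw_bytes: bytes) -> ProvenanceRecord:
    # Hash the raw content so any later change to the data is detectable.
    digest = hashlib.sha256(raw_bytes).hexdigest()
    return ProvenanceRecord(
        dataset_name=name,
        source=source,
        sha256=digest,
        created_at=datetime.now(timezone.utc).isoformat(),
    )

rec = record_provenance("loan_applications_v3", "vendor_export_2024Q1", b"...csv bytes...")
rec.transformations.append("dropped rows with missing income")
print(json.dumps(asdict(rec), indent=2))  # audit-ready JSON trail
```

In practice such records are appended to an immutable store; the point is that each entry ties a specific data version to its origin and processing steps, which is what makes traceability and auditability (option B) possible.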
13. What governance issue arises if sensitive attributes (e.g., race, gender) are used without documentation or safeguards?
A. Model performance increases
B. Legal, ethical, and fairness risks
C. Lower computational costs
D. Better reproducibility

14. Why are test planning and threshold setting important in AI governance?
A. They ensure operational policies are auditable
B. They provide measurable performance criteria for release readiness
C. They reduce compute costs
D. They allow models to bypass bias audits

15. Case Study: A financial regulator requires banks to submit model cards for credit-scoring AI systems. Which governance principle does this best illustrate?
A. Transparency and accountability
B. Energy efficiency
C. Procurement efficiency
D. Risk minimization through redundancy

16. Which principle of data protection law emphasizes that personal data must only be collected for specified, explicit, and legitimate purposes?
A. Data minimization
B. Purpose limitation
C. Accuracy
D. Integrity and confidentiality

17. Under the EU GDPR, who bears primary responsibility for determining the purposes and means of AI data processing?
A. Processor
B. Controller
C. Sub-processor
D. Data subject

18. Case Study: An AI company transfers EU users' biometric data to a U.S. server. Which governance obligation applies first?
A. Cross-border transfer mechanisms (adequacy, SCCs)
B. Accuracy obligations
C. Model explainability requirements
D. GPU utilization transparency

19. Which of the following is a core principle of privacy law under GDPR and AI governance?
A. Fairness and lawfulness
B. Infinite data retention
C. Profit maximization
D. Unlimited cross-border transfer

20. Which governance tool is required under GDPR when high-risk processing (such as AI profiling) is involved?
A. DPIA (Data Protection Impact Assessment)
B. RACI chart
C. Model card
D. Service-level agreement

21. Case Study: A startup licenses a dataset without verifying copyright status. What governance issue is most relevant?
A. Product liability
B. Licensing and intellectual property compliance
C. GPU overheating risk
D. Procurement timing

22. Under consumer protection law, which practice would be considered deceptive or unfair when deploying AI chatbots?
A. Clearly disclosing chatbot limitations
B. Misrepresenting the AI as human without disclosure
C. Offering opt-out options for AI interaction
D. Documenting risk-mitigation steps

23. Which U.S. legal doctrine is most relevant if an AI system's output harms a consumer by defect or malfunction?
A. Product liability law
B. Copyright law
C. Trade secret law
D. Tax law

24. Which framework establishes the Govern/Map/Measure/Manage functions for AI risk management?
A. ISO/IEC 42001
B. OECD AI Principles
C. NIST AI RMF
D. EU AI Act

25. Which of the following is a governance requirement under ISO/IEC 42001 (AI Management System)?
A. Following the PDCA (Plan-Do-Check-Act) cycle
B. Free market data collection
C. Avoiding legal compliance to focus on innovation
D. Unlimited cross-border data transfer

26. Under the EU AI Act, which system is classified as prohibited?
A. Emotion recognition in workplace surveillance
B. Credit scoring systems
C. Biometric identification for border control
D. AI-assisted medical diagnostics

27. Which governance tool ensures risk identification, scoring, and prioritization during AI intake?
A. Risk treatment workflow
B. Audit trail
C. GPU load testing
D. DPIA
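Question 27 refers to identifying, scoring, and prioritizing risks at intake. A common (though by no means universal) approach scores each risk as likelihood times impact and sorts the register; the sketch below is a hypothetical illustration of that mechanic, not a prescribed methodology, and the thresholds and risk entries are invented:

```python
# Hypothetical intake risk register: score = likelihood x impact (1-5 scales).
risks = [
    {"risk": "biased training data", "likelihood": 4, "impact": 5},
    {"risk": "vendor model without documentation", "likelihood": 3, "impact": 4},
    {"risk": "PII retained beyond stated purpose", "likelihood": 2, "impact": 5},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Prioritize: highest scores get treated first; lower scores may be accepted
# only via a documented exception process (compare question 28 below).
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    tier = "treat" if r["score"] >= 12 else "accept with sign-off"
    print(f'{r["score"]:>2}  {tier:<22} {r["risk"]}')
```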
28. Which governance safeguard directly addresses exceptions, risk acceptance, and waivers?
A. Policy stack architecture
B. Documentation package
C. Exception approval process
D. Procurement screening

29. Which governance artifact documents requirements and success criteria before AI model design begins?
A. DPIA
B. Requirements specification
C. Model card
D. Audit log

30. Which governance risk is most relevant when training data versions are not tracked?
A. Unfair discrimination
B. Lack of reproducibility and traceability
C. Overfitting
D. Model latency

31. Case Study: An AI system is designed without architecture documentation. During an audit, reviewers cannot determine decision flows. Which governance safeguard is missing?
A. Transparency through design documentation
B. Continuous retraining
C. GPU efficiency testing
D. Data minimization

32. Which practice helps prevent data leakage between training and validation sets?
A. Random splitting with independent holdout sets
B. Ignoring mislabeled examples
C. Model ensembling
D. Early stopping

33. Which governance safeguard is used to detect drift after deployment?
A. Baseline metrics and continuous monitoring
B. GPU overclocking
C. Training on larger datasets
D. Model compression
(A minimal drift-monitoring sketch follows question 40.)

34. Case Study: Before release, auditors request proof that the model's bias and fairness thresholds were tested. Which governance step ensures this?
A. Conformity assessment
B. Performance benchmarking
C. Bias and fairness testing
D. Procurement control

35. Why do governance frameworks require monitoring metrics and alerts post-deployment?
A. To track system performance and trigger corrective actions
B. To minimize GPU energy costs
C. To eliminate the need for retraining
D. To remove human oversight

36. Which governance safeguard tracks changes in model versions for audit purposes?
A. Change management and retraining triggers
B. GPU allocation reports
C. Neural architecture pruning
D. Random initialization

37. Which governance safeguard addresses fine-tuning of general-purpose AI for specific use cases?
A. Procurement screening
B. Prompt governance and fine-tuning oversight
C. Feature selection
D. GPU provisioning

38. A large retail company deploys an AI recommendation system to personalize shopping suggestions. During testing, engineers realize the system sometimes recommends harmful products (e.g., tobacco to minors) due to a lack of filters. Which governance safeguard is most critical in this scenario?
A. Implementing user access controls and guardrails
B. Expanding the dataset with more demographic diversity
C. Using larger GPUs for training speed
D. Eliminating human oversight entirely

39. An insurance company contracts a third-party vendor to supply an AI claims-processing system. Regulators later discover the system uses biased datasets that lead to unfair claim rejections. Which governance responsibility applies to the insurer?
A. None, since the vendor provided the AI system
B. Shared liability, requiring vendor due diligence and contractual obligations
C. Only the vendor bears responsibility under contract law
D. Deploying the system without documentation is acceptable

40. Case Study: A government agency deploys an open-source AI model for citizen services without verifying its license terms. Months later, the community identifies copyright violations in the training dataset. Which governance safeguard was neglected?
A. Vendor procurement control
B. Open-source license compliance
C. Audit trail maintenance
D. GPU utilization transparency
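As referenced in question 33, drift detection compares live data against baseline metrics captured at deployment. One widely used (but not mandated) statistic for this is the Population Stability Index (PSI), which measures how far a feature's production distribution has shifted from its baseline. The sketch below is a minimal illustration; the alert thresholds shown are common rules of thumb, not regulatory requirements:

```python
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    # Widen the outer edges so out-of-range production values are still counted.
    edges[0], edges[-1] = -np.inf, np.inf
    b_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    l_frac = np.histogram(live, bins=edges)[0] / len(live)
    # Floor empty bins to avoid log(0).
    b_frac = np.clip(b_frac, 1e-6, None)
    l_frac = np.clip(l_frac, 1e-6, None)
    return float(np.sum((l_frac - b_frac) * np.log(l_frac / b_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # feature snapshot at deployment
live = rng.normal(0.4, 1.0, 10_000)       # shifted production data

score = psi(baseline, live)
# Illustrative cut-offs only; real thresholds belong in the monitoring policy.
status = "alert" if score > 0.25 else "watch" if score > 0.10 else "stable"
print(f"PSI = {score:.3f} -> {status}")
```

In a governance context, the "alert" branch is what triggers the corrective actions and escalation paths asked about in questions 35 and 48.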
41. When fine-tuning general-purpose AI (GPAI) models for customer-facing applications, which governance step is essential?
A. Ignoring pre-training datasets since they are open-source
B. Establishing oversight for prompt engineering, fine-tuning, and safety testing
C. Allowing unrestricted user prompts to maximize creativity
D. Eliminating all human oversight

42. A healthcare provider integrates a retrieval-augmented generation (RAG) model for diagnostic support. During evaluation, reviewers notice that hallucinated answers are sometimes presented as factual medical advice. Which governance control best mitigates this risk?
A. Kill-switch for immediate system termination
B. Red-team testing for adversarial and harmful outputs
C. Model pruning for smaller memory footprint
D. Bias correction in feature engineering
Explanation: Red-team testing simulates adversarial/harmful outputs and helps identify hallucination risks. Options C and D are technical fixes but don't address the governance-level safety need.

43. Case Study: A university deploys a GPAI system for student advising. Students begin relying solely on AI-generated responses without human review, leading to academic disputes. Which governance safeguard was missing?
A. Human-in-the-loop oversight requirements
B. Expanding training datasets
C. GPU scheduling optimization
D. Fairness audit of labels

44. Which governance safeguard ensures end-users clearly understand how to use AI responsibly?
A. Providing detailed user instructions, disclaimers, and transparency statements
B. Minimizing documentation to speed deployment
C. Releasing the system without explanations to avoid confusion
D. Using black-box AI only

45. Which governance safeguard allows safe shutdown of an AI system when risks escalate beyond acceptable thresholds?
A. Drift detection
B. Kill-switch and termination protocols
C. User disclaimers only
D. Increasing training data size
(A minimal kill-switch sketch appears after question 50.)

46. Which governance tool is most useful for visualizing model health, KPIs, and KRIs in real time?
A. Model health dashboards
B. Procurement policies
C. Vendor license terms
D. Audit trail logs

47. Case Study: A bank integrates AI credit scoring but fails to notify regulators about significant model updates. Which governance safeguard was neglected?
A. Change management controls with regulatory reporting
B. Training dataset expansion
C. Randomized bias audits
D. Vendor due diligence

48. Which governance control ensures system performance and issues are escalated quickly to leadership?
A. Escalation workflows and issue management systems
B. GPU utilization dashboards
C. Procurement reviews
D. Fairness metrics only

49. Case Study: An AI facial recognition system used in airports triggers high false-positive alerts, causing delays and complaints. Which governance step should have been prioritized?
A. Vendor procurement review
B. Procurement contracts only
C. Performance monitoring metrics with defined thresholds
D. GPU usage benchmarking

50. Which governance mechanism ensures board members can review AI performance, risks, and compliance?
A. Independent assurance audits
B. Board-level reporting and attestations
C. GPU scheduling
D. Vendor invoice transparency
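As referenced in question 45, a kill-switch lets operators halt an AI system's outputs when risk thresholds are breached, without waiting for a redeployment. One simple pattern is a runtime flag checked before every inference call. The sketch below is a hypothetical, process-local version; production implementations typically back the flag with a shared configuration store so every replica stops together:

```python
import threading

class KillSwitch:
    """Process-local kill-switch (illustrative only; real deployments
    usually share the flag across replicas via a config service)."""
    def __init__(self) -> None:
        self._tripped = threading.Event()

    def trip(self, reason: str) -> None:
        print(f"KILL-SWITCH TRIPPED: {reason}")
        self._tripped.set()

    def active(self) -> bool:
        return self._tripped.is_set()

switch = KillSwitch()

def serve_prediction(features: dict) -> str:
    # Every request checks the switch before the model is invoked.
    if switch.active():
        return "service halted pending review"  # safe fallback, no AI output
    return f"model output for {features}"       # placeholder for real inference

print(serve_prediction({"income": 52_000}))
switch.trip("bias threshold exceeded in monitoring")
print(serve_prediction({"income": 52_000}))
```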