Part 1: Critical Thinking Quiz
Test your understanding of AI foundations, architecture, and prompting.
1. What distinguishes "Traditional Programming" from "Machine Learning"?
Traditional programming uses Python; ML uses C++.
Traditional programming is deterministic (rule-based); ML is probabilistic (inference-based).
Traditional programming is faster; ML is slower.
Traditional programming uses CPUs; ML uses GPUs.
2. In the context of AI models, what is a "Hallucination"?
When the model refuses to answer.
When the model generates a fluent, grammatically correct response that is factually false.
When the model crashes due to lack of VRAM.
When the model becomes sentient.
3. What does the term "inference" refer to in AI?
The process of training the model on a massive corpus.
The process of feeding live data into a static model to get a prediction.
The mathematical adjustment of weights and biases.
The physical architecture of the GPU.
4. Which component is identified as the critical bottleneck for running large AI models locally?
CPU Clock Speed
SSD Storage Capacity
VRAM (Video RAM) Capacity
Internet Connection Speed
5. Why are GPUs preferred over CPUs for AI workloads?
GPUs have higher clock speeds.
GPUs are optimized for sequential processing.
GPUs are optimized for parallel processing (SIMD).
GPUs consume less power.
6. What is the primary trade-off when running AI locally (Edge) vs. in the Cloud?
Local is more expensive per query.
Local offers less privacy.
Local has hardware constraints (model size) but offers data sovereignty.
Cloud is slower due to latency.
7. According to the "PTC Model" of prompt architecture, what are the three key components?
Python, Tensor, Compute
Persona, Task, Constraints
Prompt, Test, Check
Pre-training, Tokenization, Context
8. Why is "Ambiguity" considered the enemy in prompt engineering?
It makes the prompt too short.
It forces the model to "guess" missing constraints, leading to generic outputs.
It increases the cost of the API call.
It causes the model to overheat.
9. What is the "Context Window"?
The graphical user interface of the chatbot.
The model's limit on how much text it can "remember" or process at once.
The time of day when the server is least busy.
The physical screen size of the device.
10. What is "Few-Shot Inference"?
Prompting the model without any examples.
Providing multiple examples of the desired input/output to define the task.
A method that uses fewer tokens to save money.
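To make the few-shot idea from question 10 concrete, here is a minimal sketch of how a few-shot prompt is assembled: a handful of labeled input/output examples are concatenated before the real query, so the examples themselves define the task. The sentiment-labeling task and all names here are hypothetical, chosen only for illustration.

```python
# Hypothetical labeled examples that demonstrate the desired task
# (sentiment classification) to the model before the real input.
examples = [
    ("The battery lasts all day.", "positive"),
    ("The screen cracked in a week.", "negative"),
]

def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples, then append the unlabeled query."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")  # model completes this line
    return "\n\n".join(blocks)

prompt = build_few_shot_prompt(examples, "Shipping was fast.")
print(prompt)
```

Sending this string as the prompt leaves the final "Sentiment:" open for the model to complete; with zero examples the same structure would be zero-shot inference.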