GenAI Knowledge Check: Master Summary


The Architecture (Questions 1, 2 & 9)

These questions focus on how a model is built and its built-in limitations.

  • 1. Parameters:

    • Answer: Internal weights and settings that define the model's structure and intelligence.

    • Concept: Think of these as the "knobs" the model adjusts during training. More parameters often equal a more capable (but slower) model.

  • 2. Context Window Limit:

    • Answer: The model drops the earliest information to make room for new data, potentially leading to hallucinations.

    • Concept: Like short-term memory. Once it’s full, the "oldest" info is deleted so it can keep talking, which can cause it to lose track of original instructions.
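The "oldest info is deleted" behavior above can be sketched as a sliding-window truncation. This is a toy: `count_tokens=len` counts characters as a stand-in for a real subword tokenizer, and message names are hypothetical.

```python
def truncate_to_window(messages, max_tokens, count_tokens=len):
    """Keep only the most recent messages that fit in the context window.

    Sketch only: real systems count subword tokens and may pin the
    system prompt; here everything, including the earliest instructions,
    is eligible to be dropped -- which is exactly how they get lost.
    """
    kept = []
    used = 0
    # Walk from newest to oldest; stop once the window is full.
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

Note that once the window fills, the original instructions at the front of the conversation are the first casualties.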

  • 9. High-Volume/Low-Latency Tasks:

    • Answer: Small Language Models (SLMs).

    • Concept: If you need speed and repetition over deep reasoning, a smaller, lighter model is faster and cheaper than a massive "Frontier" model.


Enterprise Strategy (Questions 3, 4 & 8)

These focus on how businesses actually use AI to gain an advantage.

  • 3. The Competitive Moat:

    • Answer: Connecting GenAI to unique, proprietary data and domain expertise.

    • Concept: Everyone has the model; not everyone has your company's private data. That's the secret sauce.

  • 4. RAG (Retrieval-Augmented Generation):

    • Answer: It allows the model to look up real-time information from external trusted sources before generating an answer.

    • Concept: The "Open Book" method. It searches your files first, then answers based on what it found.

  • 8. Grounding:

    • Answer: It anchors the model's responses in specific, verified organizational data to reduce hallucinations.

    • Concept: Ensuring the AI "stays in its lane" by forcing it to use specific, verified facts rather than guessing.
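Grounding is often implemented as prompt assembly: the verified facts and a refusal instruction are bolted onto every request. A minimal sketch, with wording that is purely illustrative (real products phrase this differently):

```python
def grounded_prompt(question, verified_facts):
    """Anchor the model to verified organizational data.

    The refusal clause is the key idea: if the facts don't cover the
    question, the model is told to say so instead of guessing.
    """
    facts = "\n".join(f"- {fact}" for fact in verified_facts)
    return (
        "Use ONLY the verified facts below. If the answer is not in the "
        "facts, say you don't know rather than guessing.\n\n"
        f"Verified facts:\n{facts}\n\n"
        f"Question: {question}"
    )
```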


Agents & Reasoning (Questions 5, 7 & 10)

These look at how AI moves from "chatting" to "doing."

  • 5. GenAI vs. AI Agents:

    • Answer: GenAI is for single-step generation, while agents use reasoning for multi-step, adaptive workflows.

    • Concept: GenAI is a calculator; an Agent is a mathematician who knows which buttons to press to solve a long word problem.

  • 7. The Intelligent Router:

    • Answer: Supervisor Agent Brick.

    • Concept: The "Manager." It listens to your request and decides which "specialist" (sub-agent) is the right one to fix it.

  • 10. The "Brilliant Intern" Analogy:

    • Answer: Highly knowledgeable but takes instructions extremely literally and lacks specific business context.

    • Concept: You have to be specific. It’s smart, but it doesn't know your company's "unspoken" rules yet.


Evaluation & Bias (Question 6)

How we measure if the AI is actually doing a good job.

  • 6. LLM-as-a-Judge (The "Con"):

    • Answer: It may exhibit "verbosity bias," favoring longer responses regardless of accuracy.

    • Concept: AI judges often fall for "fluff." They might give a higher grade to a long, poetic answer than a short, 100% correct one.
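The verbosity bias, and one common mitigation, can be shown with a toy. The `naive_judge` stands in for a judge that rewards length; `reference_judge` sketches the fix of scoring against a known-good answer, so padding earns nothing. Both scorers are illustrative, not a real evaluation API.

```python
def naive_judge(answer):
    # Verbosity-biased toy: longer answers score higher, period.
    return len(answer.split())

def reference_judge(answer, reference):
    # Mitigation sketch: score by overlap with a trusted reference answer.
    ref = set(reference.lower().split())
    ans = set(answer.lower().split())
    return len(ref & ans) / len(ref)
```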


Quick Reference Comparison

| Feature | Standard GenAI | AI Agent |
| --- | --- | --- |
| Workflow | Single-turn (Input → Output) | Multi-step (Plan → Tool → Result) |
| Memory | Context Window | Context + Long-term "Memory" storage |
| Data Access | Training Data (Static) | RAG / Grounding (Real-time) |
| Logic | Pattern Recognition | Iterative Reasoning |