
    How to Tackle a GCP PCA Case Study Question

ExpertMinds Editorial · 14 April 2026 · 6 min read

    A step-by-step walkthrough of the GCP Professional Cloud Architect case study format — how to read a scenario, satisfy multiple competing requirements, and avoid the trap of partial answers.

The GCP PCA exam includes case studies that every candidate sees: Mountkirk Games, TerramEarth, EHR Healthcare, and Helicopter Racing League. These are described in the exam guide, so you can study them in advance. Questions based on these cases ask you to apply your knowledge of GCP architecture to specific requirements within the scenario, and a partial answer, one that satisfies three of four requirements, is a wrong answer. Here is a TerramEarth-style question, followed by a four-step walkthrough.

Sample question (choose ONE answer):

    TerramEarth manufactures heavy equipment and collects telemetry data from 500,000 vehicles globally. The data is streamed in real time from vehicles with connectivity, and uploaded in batch from vehicles in remote areas. The company wants to predict when equipment will need maintenance before it fails. The solution must handle both real-time streams and batch uploads, scale to millions of vehicles, and make predictions available to field technicians within 30 minutes of data arrival. Which architecture best meets TerramEarth's requirements?


Step 1: Extract all requirements before reading options

From the scenario, extract four explicit requirements:

    1. Handle real-time streaming data
    2. Handle batch uploads from remote areas
    3. Scale to millions of vehicles (petabyte-scale data)
    4. Make predictions available within 30 minutes of data arrival

    Write these as a checklist. Any option that fails even one is eliminated.

Step 2: Map requirements to GCP service patterns

Before looking at the options, sketch what an architecture that satisfies all four requirements would look like (a minimal pipeline sketch follows this list):

    • Real-time + batch ingestion → Pub/Sub handles both (real-time publishing; batch processors can publish batches too)
    • Scalable processing → Dataflow (Apache Beam) handles both streaming and batch in one pipeline
    • Petabyte-scale storage → Bigtable for time-series data, BigQuery for analytics
    • 30-minute prediction latency → requires a pre-trained ML model that can score new data on arrival; Vertex AI online prediction
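To make the pattern concrete, here is a minimal Apache Beam sketch of the ingestion side, the kind of pipeline Dataflow would run. The project, topic, and table names are hypothetical, and a real pipeline would add validation, windowing, and feature extraction:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.io.gcp.bigtableio import WriteToBigTable
from google.cloud.bigtable.row import DirectRow


def to_bigtable_row(record: dict) -> DirectRow:
    # Row key = vehicle ID + timestamp, so each vehicle's telemetry
    # is stored contiguously and is cheap to scan per vehicle.
    row = DirectRow(row_key=f"{record['vehicle_id']}#{record['ts']}".encode())
    row.set_cell("telemetry", b"payload", json.dumps(record).encode())
    return row


# streaming=True lets one pipeline serve both the real-time feed and
# the batch uploads, since both arrive on the same Pub/Sub topic.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadTelemetry" >> beam.io.ReadFromPubSub(
            topic="projects/terramearth-demo/topics/vehicle-telemetry")
        | "Parse" >> beam.Map(json.loads)
        | "ToRow" >> beam.Map(to_bigtable_row)
        | "Store" >> WriteToBigTable(
            project_id="terramearth-demo",
            instance_id="telemetry",
            table_id="vehicle-readings")
    )
```

The same pipeline code handles both arrival modes, which is exactly why the exam favours Dataflow over a separate Dataproc path for batch.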

Step 3: Eliminate options that fail any requirement

Typical wrong options:

    • "Use Cloud SQL to store all telemetry data, then run a daily batch prediction job": Cloud SQL does not scale to petabytes, and a daily batch job misses the 30-minute SLA. Two failures. Eliminate.
    • "Use Pub/Sub and Dataflow for streaming, but process batch uploads separately with Dataproc, storing results in BigQuery": technically valid but more complex than necessary. Dataproc (Hadoop/Spark) is aimed at migrating existing Spark jobs; Dataflow handles streaming and batch in a single unified pipeline. Not wrong exactly, but not the best GCP-native answer.
    • "Use Pub/Sub, Dataflow, Bigtable, and Vertex AI online prediction": handles real-time and batch ingestion through Pub/Sub, processes with Dataflow, stores in Bigtable, and serves predictions via Vertex AI. Satisfies all four requirements. This is the correct answer shape. The batch leg of this option is sketched below.
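The "batch through Pub/Sub" claim in the correct option is worth seeing in code. A hedged sketch, assuming a hypothetical topic name: a vehicle that regains connectivity simply drains its local buffer into the same topic the always-connected fleet publishes to, so nothing downstream needs a second ingestion path:

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Same topic the streaming vehicles publish to (name is illustrative).
topic_path = publisher.topic_path("terramearth-demo", "vehicle-telemetry")


def drain_buffer(buffered_readings: list[bytes]) -> None:
    """Publish locally buffered telemetry once connectivity returns."""
    futures = [
        publisher.publish(topic_path, data=reading)
        for reading in buffered_readings
    ]
    # Block until Pub/Sub acknowledges every message, so the vehicle
    # can safely clear its local buffer.
    for future in futures:
        future.result()
```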

Step 4: Verify the answer against every constraint

Pub/Sub → Dataflow → Bigtable + Vertex AI:

    ✓ Real-time: Pub/Sub ingests streaming data as soon as it is published
    ✓ Batch: remote vehicles can publish buffered batches to Pub/Sub when connected
    ✓ Petabyte scale: Bigtable handles petabyte-scale time-series data; Dataflow autoscales
    ✓ 30-minute latency: Dataflow processes data as it arrives; Vertex AI online prediction returns results in milliseconds (a request sketch follows below)

    All four constraints satisfied. Confirm the answer.
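The last leg, the 30-minute prediction SLA, comes down to an online prediction call against an already-deployed model. A sketch using the Vertex AI SDK; the project, endpoint ID, and feature names are illustrative, not part of the case study:

```python
from google.cloud import aiplatform

aiplatform.init(project="terramearth-demo", location="us-central1")

# A maintenance-prediction model assumed to be deployed to this endpoint.
endpoint = aiplatform.Endpoint(
    "projects/terramearth-demo/locations/us-central1/endpoints/1234567890")

# One instance per vehicle; in the full architecture these features
# would be computed by the Dataflow pipeline as telemetry arrives.
prediction = endpoint.predict(instances=[
    {"vehicle_id": "TE-0001", "engine_temp": 98.2, "vibration_rms": 0.41},
])
print(prediction.predictions)  # e.g. a failure probability per vehicle
```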

Key fact: The TerramEarth pattern (Pub/Sub + Dataflow + Bigtable + Vertex AI) is worth memorising. It appears in multiple variations: any question about IoT data + prediction + scale will have this architecture as the correct answer or as the foundation of the correct answer.

    Case study reading strategy

    • Read the full case study description first — before any questions. You are allowed to do this.
    • Identify: (1) the current pain point, (2) the scaling requirement, (3) the cost constraint, (4) the compliance or security requirement. Most cases have all four.
    • For each question, re-read the specific requirement being asked — do not rely on memory of the case. The question may reference a specific technical detail.
    • Eliminate before confirming — narrow to two options, then find the specific flaw in the wrong one.
    • If an option is "architecturally valid but not GCP-native" (e.g. self-managed Kafka instead of Pub/Sub) — it is almost always wrong. The exam rewards GCP-native managed services.


    GCP PCA at a glance

50-60 questions · 120 minutes · 2-year validity

    Pass mark: Not published by Google


