ExpertMinds

Ace your certifications with Practice Exams and AI assistance.

© 2026 TinyHive Labs. Company number 16262776.
    GCP Professional Cloud Architect Practice Exam 4 · Question 19
    Hard · 1 mark · Multiple Choice
    Domain 5: Managing Implementation and Ensuring Solution and Operations Reliability
    Tags: Vertex AI · MLOps · Predictive Maintenance
    This question is part of a case study (Case 16).

    CASE STUDY: AutoIoT

    Overview: Connected car manufacturer. 1M vehicles sending telemetry every 5 seconds.
    Business: Predictive maintenance alerts, real-time fleet tracking, monetize anonymized data.
    Executives:

    • CEO: "Leverage AI to predict failures."
    • CTO: "Current MQTT brokers crashing. Need fully managed, scalable ingestion."
    • DPO: "Vehicle location is sensitive. Strip PII before analytics."

    Tech: Ingest millions of msgs/sec, real-time stream processing for anomalies, store raw data for ML, sub-second queries for dashboards.
    Constraints: Vehicles lose connection and send late batch data. ML models updated weekly. Strict analytics budget.



    To meet the CEO's requirement for predictive maintenance, how should you orchestrate the weekly ML model training pipeline?

    Answer options:

    A. Write a bash script on a Compute Engine instance that runs via cron to train the model.
    B. Use Cloud Build to train the ML model and deploy it to Cloud Run.
    C. Use Vertex AI Pipelines to orchestrate the extraction of data from BigQuery, model training, and deployment to a Vertex AI Endpoint.
    D. Use Dataflow to train the machine learning model in real-time.

    How to approach this question

    Identify the primary GCP service for MLOps and pipeline orchestration.

    Full Answer

    ✓ Correct: C. Use Vertex AI Pipelines to orchestrate the extraction of data from BigQuery, model training, and deployment to a Vertex AI Endpoint.

    Vertex AI is Google Cloud's unified platform for machine learning, and Vertex AI Pipelines lets you build, run, and manage ML workflows (MLOps). It is the right tool to automate the weekly extraction of historical telemetry data from BigQuery, retrain the predictive maintenance model, and deploy the updated model to an endpoint for inference.
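    The orchestration in option C is a chain of three dependent steps: extract, train, deploy. A minimal stdlib-only sketch of that step ordering is below; in a real solution each function would be a Kubeflow Pipelines (kfp) component compiled and submitted as a Vertex AI Pipelines job on a weekly schedule. All names here (the bucket path, function names, week label) are illustrative, not taken from the exam question or any GCP API.

    ```python
    # Sketch of the weekly pipeline's step ordering (extract -> train -> deploy).
    # Hypothetical names throughout; a production version would express each step
    # as a kfp component running inside Vertex AI Pipelines.

    def extract_telemetry(week: str) -> str:
        # Stand-in for a BigQuery extraction step writing a weekly snapshot.
        return f"gs://bucket/telemetry/{week}.parquet"

    def train_model(dataset_uri: str) -> str:
        # Stand-in for a training step consuming the extracted snapshot.
        return f"model-trained-on-{dataset_uri.rsplit('/', 1)[-1]}"

    def deploy_to_endpoint(model: str) -> str:
        # Stand-in for deploying the retrained model to a serving endpoint.
        return f"endpoint serving {model}"

    def weekly_pipeline(week: str) -> str:
        # Each step depends on the previous step's output, which is exactly
        # the dependency graph a pipeline orchestrator manages for you.
        data = extract_telemetry(week)
        model = train_model(data)
        return deploy_to_endpoint(model)

    print(weekly_pipeline("2026-W01"))
    # prints: endpoint serving model-trained-on-2026-W01.parquet
    ```

    The point of the sketch is the dependency chain: a cron script (option A) re-implements this ordering by hand with no retries, lineage, or artifact tracking, which is what a managed pipeline service provides.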

    Common mistakes

    Confusing CI/CD tools (Cloud Build) with MLOps tools (Vertex AI Pipelines).

