GCP Professional Cloud Architect Cheat Sheet 2026
An architecture-level reference for the GCP PCA exam — design patterns, reliability engineering, data pipeline services, security controls, and the Google Cloud Architecture Framework's decision pillars.
ExpertMinds Editorial · 10 February 2026
The GCP PCA tests architectural judgment at enterprise scale. Every question involves trade-offs between reliability, cost, performance, security, and operational complexity. Unlike the ACE, you are not asked what a service does — you are asked which combination of services and design patterns best solves a business problem with specific constraints.
Key fact: ~50 questions · 2 hours · pass score approximately 70–72%. The exam includes case studies (Mountkirk Games, Dress4Win, TerramEarth, EHR Healthcare) that appear in every sitting.
Architecture Design Principles
| Principle | GCP pattern | Key services |
|---|---|---|
| Reliability — eliminate single points of failure | Multi-region, managed services, health checks | Global Load Balancing, Cloud Spanner, Firestore |
| Scalability — handle growth without re-architecture | Managed auto-scaling, serverless, partitioned databases | Cloud Run, Bigtable, Cloud Spanner, BigQuery |
| Security — least privilege, defence in depth | VPC Service Controls, Workload Identity, CMEK | Cloud KMS, Secret Manager, Binary Authorization |
| Cost — optimise without over-engineering | Right-sizing, committed use, lifecycle policies | Recommender, Committed Use Discounts, Storage lifecycle |
| Operational excellence — automate toil | IaC, CI/CD, observability | Terraform, Cloud Build, Cloud Deploy, Cloud Monitoring |
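To make the cost row concrete, here is a minimal sketch of a Cloud Storage lifecycle policy using the google-cloud-storage Python client. The bucket name and thresholds are hypothetical, not from the exam:

```python
# Minimal lifecycle-policy sketch; bucket name and ages are illustrative.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("analytics-archive")

# Downgrade objects to Coldline after 90 days, delete them after a year.
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
bucket.add_lifecycle_delete_rule(age=365)
bucket.patch()  # persist the updated rules on the bucket
```

Rules like these run server-side with no operational toil, which is exactly the kind of trade-off the exam expects you to reach for before custom automation.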
Data Pipeline Services
| Service | What it does | Exam trigger phrase |
|---|---|---|
| Pub/Sub | Async messaging — producers/consumers decoupled | "decouple", "event streaming", "fan-out", "at-least-once delivery" |
| Dataflow | Apache Beam — stream and batch data processing | "ETL", "real-time processing", "Apache Beam", "unbounded data" |
| Dataproc | Managed Hadoop and Spark clusters | "existing Hadoop/Spark", "lift and shift Spark jobs" |
| Cloud Composer | Managed Apache Airflow — workflow orchestration | "DAG", "workflow scheduling", "complex pipeline dependencies" |
| BigQuery | Serverless analytics data warehouse | "analytics at scale", "SQL on petabytes", "BI dashboards" |
| Looker / Looker Studio | BI and data exploration layer on top of BigQuery | "dashboards", "self-serve analytics", "data visualisation" |
Tip: Dataflow vs Dataproc: Dataflow (Beam) is the preferred serverless choice for new pipelines — no cluster management. Dataproc is for existing Hadoop/Spark workloads you want to migrate with minimal changes.
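As an illustration of the decoupling pattern that triggers a Pub/Sub answer, here is a minimal publisher sketch with the google-cloud-pubsub client. The project and topic names are hypothetical:

```python
# Minimal Pub/Sub publisher sketch; project and topic are hypothetical.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "telemetry-events")

# publish() is asynchronous; result() blocks until the broker acknowledges.
# Delivery is at-least-once, so downstream consumers must be idempotent.
future = publisher.publish(topic_path, b'{"device_id": "truck-42", "rpm": 1800}')
print(f"Published message {future.result()}")
```

A Dataflow job or Cloud Run service subscribed to the topic can then scale independently of the producers — the decoupling the exam trigger phrases point at.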
Security Architecture
| Control | What it does | When to use |
|---|---|---|
| VPC Service Controls | Create security perimeter around GCP services — prevent data exfiltration | Compliance-driven; sensitive data; prevent access from outside perimeter |
| Workload Identity Federation | Allow external workloads (GitHub Actions, AWS) to access GCP without a service account key | CI/CD pipelines; avoid long-lived credentials |
| Customer-Managed Encryption Keys (CMEK) | Bring your own keys managed in Cloud KMS | Regulatory requirement for key control; audit of key usage |
| Binary Authorization | Only allow signed, trusted container images to deploy to GKE/Cloud Run | Supply chain security; prevent untrusted images |
| Cloud Armor | WAF and DDoS protection for HTTP(S) Load Balancer | "protect against SQL injection/XSS", "block by country", "rate limit" |
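As a concrete example of the CMEK row, here is a minimal sketch (all resource names are hypothetical) that sets a Cloud KMS key as the default encryption key on a Cloud Storage bucket:

```python
# Attach a customer-managed key (CMEK) as a bucket default; names are hypothetical.
from google.cloud import storage

client = storage.Client(project="my-project")
bucket = client.get_bucket("sensitive-data-bucket")

# New objects written to the bucket are encrypted with this key by default.
# The Cloud Storage service agent needs roles/cloudkms.cryptoKeyEncrypterDecrypter.
bucket.default_kms_key_name = (
    "projects/my-project/locations/us/keyRings/ehr-ring/cryptoKeys/ehr-key"
)
bucket.patch()
```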
Reliability Engineering
- SLI (Service Level Indicator): the actual metric — e.g. request latency, error rate, availability
- SLO (Service Level Objective): target for the SLI — e.g. 99.9% of requests complete in under 200ms
- SLA (Service Level Agreement): contractual commitment to customers; SLO is usually stricter than the SLA
- Error budget: 100% minus SLO. If the SLO is 99.9%, the error budget is 0.1% (~43 min/month). Burn rate = how fast you are consuming the error budget (worked example after this list)
- Toil: manual, repetitive, automatable operational work. SRE principle is to keep toil below 50% of team time
- Multi-region HA: use global load balancing with health checks. Back-ends in multiple regions. Failover is automatic.
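A quick worked version of the error-budget arithmetic above, with illustrative figures:

```python
# Worked error-budget arithmetic for a 99.9% SLO (figures are illustrative).
slo = 0.999
minutes_per_month = 30 * 24 * 60          # 43,200 minutes in a 30-day month

error_budget_fraction = 1 - slo           # 0.001
budget_minutes = error_budget_fraction * minutes_per_month  # ~43.2 min/month

# Burn rate = observed failure rate / budgeted failure rate.
# A rate of 1.0 spends the budget in exactly one month; 14.4 (a common
# fast-burn alert threshold) spends it in about two days.
observed_failure_rate = 0.0144
burn_rate = observed_failure_rate / error_budget_fraction   # 14.4

print(f"Budget: {budget_minutes:.1f} min/month, burn rate: {burn_rate:.1f}")
```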
Practice GCP PCA scenarios
The PCA exam uses case studies with multiple constraints. Practice reading a scenario and confirming that every requirement is satisfied before selecting an answer.
Migration Strategies (6 Rs)
| Strategy | What it means | GCP approach |
|---|---|---|
| Rehost (lift & shift) | Move as-is to cloud VMs | Migrate to Virtual Machines (formerly Migrate for Compute Engine/Velostrata) |
| Replatform (lift & reshape) | Minor changes to use managed services | Move MySQL to Cloud SQL; move Hadoop to Dataproc |
| Repurchase | Switch to SaaS equivalent | Move email server to Google Workspace |
| Refactor / Re-architect | Redesign for cloud-native | Decompose monolith to Cloud Run microservices |
| Retire | Decommission — no longer needed | Eliminate after audit |
| Retain | Keep on-premises — not ready or not worth migrating | Hybrid with Cloud Interconnect |
Case Study Pattern Recognition
Key fact: Every PCA case study has: (1) a legacy workload to migrate, (2) a scaling or availability requirement, (3) a cost constraint, (4) a compliance/security requirement. Read all four before selecting answers. A solution that meets three out of four is a wrong answer.
- Mountkirk Games: online multiplayer, global players, latency-sensitive → Spanner, GKE, global LB, Agones for game servers
- Dress4Win: e-commerce monolith moving to cloud → Cloud SQL, App Engine or GKE, Cloud Storage, VPN for hybrid phase
- TerramEarth: IoT / heavy equipment telemetry → Pub/Sub, Dataflow, Bigtable, BigQuery, ML for predictive maintenance
- EHR Healthcare: healthcare data, HIPAA compliance → VPC Service Controls, CMEK, Cloud Healthcare API, strict audit logging
Ready to practice the full GCP PCA?
Graded results, exam simulation, and detailed guidance on every question.