GCP-PCDE: Google Professional Cloud DevOps Engineer

A Professional Cloud DevOps Engineer is responsible for efficient development operations that can balance service reliability and delivery speed. They are skilled at using Google Cloud to build software delivery pipelines, deploy and monitor services, and manage and learn from incidents.

The Professional Cloud DevOps Engineer exam assesses your ability to:

  • Apply site reliability engineering principles to a service
  • Optimize service performance
  • Implement service monitoring strategies
  • Build and implement CI/CD pipelines for a service
  • Manage service incidents

Google GCP-PCDE Exam Summary:

Exam Name: Google Professional Cloud DevOps Engineer (GCP-PCDE)
Exam Code: GCP-PCDE
Exam Price: $200 USD
Duration: 120 minutes
Number of Questions: 50
Passing Score: Pass / Fail (approx. 70%)
Recommended Training / Books: Google Cloud documentation; Google Cloud solutions
Sample Questions: Google GCP-PCDE Sample Questions
Recommended Practice: Google Cloud Platform – Professional Cloud DevOps Engineer (GCP-PCDE) Practice Test

Google GCP-PCDE Syllabus:

Applying site reliability engineering principles to a service
Balance change, velocity, and reliability of the service (a worked error-budget example follows this list):
– Discover SLIs (e.g., availability, latency)
– Define SLOs and understand SLAs
– Agree to consequences of not meeting the error budget
– Construct feedback loops to decide what to build next
– Eliminate toil via automation
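
The error-budget idea above is easiest to see with numbers. A minimal sketch in Python, assuming an illustrative 99.9% availability SLO over a 30-day window:

```python
# Error budget for a 99.9% availability SLO over a 30-day window.
# The numbers are illustrative, not tied to any particular service.

SLO_TARGET = 0.999             # 99.9% availability objective
WINDOW_MINUTES = 30 * 24 * 60  # 30-day rolling window

error_budget = 1 - SLO_TARGET               # fraction of the window allowed to fail
budget_minutes = WINDOW_MINUTES * error_budget

print(f"Error budget: {error_budget:.4%} of the window")
print(f"Allowed downtime: {budget_minutes:.1f} minutes per 30 days")

# If monitoring shows 20 minutes of downtime so far this window:
consumed = 20 / budget_minutes
print(f"Budget consumed: {consumed:.0%}")   # ~46% of the budget spent
```
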
Manage service life cycle (a quota-inspection sketch follows this list):
– Manage a service (e.g., introduce a new service, deploy, maintain, and retire it)
– Plan for capacity (e.g., quotas and limits management)
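
For the quota side of capacity planning, the Compute Engine API exposes per-region quota usage. A sketch using the google-cloud-compute client; the project ID, region, and the 80% threshold are illustrative assumptions:

```python
# Programmatic quota inspection for capacity planning.
from google.cloud import compute_v1

def print_region_quotas(project_id: str, region: str) -> None:
    client = compute_v1.RegionsClient()
    region_info = client.get(project=project_id, region=region)
    for quota in region_info.quotas:
        # Flag quotas that are more than 80% consumed.
        if quota.limit and quota.usage / quota.limit > 0.8:
            print(f"{quota.metric}: {quota.usage}/{quota.limit}")

print_region_quotas("my-project", "us-central1")
```
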
Ensure healthy communication and collaboration for operations:
– Prevent burnout (e.g., automate repetitive operational work)
– Foster a learning culture
– Foster a culture of blamelessness
Building and implementing CI/CD pipelines for a service
Design CI/CD pipelines (a trigger sketch follows this list):
– Creating and storing immutable artifacts with Artifact Registry
– Deployment strategies with Cloud Build and Spinnaker
– Deployment to hybrid and multicloud environments with Anthos, Spinnaker, and Kubernetes
– Artifact versioning strategy with Cloud Build and Artifact Registry
– CI/CD pipeline triggers with Cloud Source Repositories, external SCM, and Pub/Sub
– Testing a new version with Spinnaker
– Configuring deployment processes (e.g., approval flows)
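
For the Pub/Sub trigger item above, a pipeline stage can be kicked off by publishing a message that Cloud Build or Spinnaker subscribes to. A minimal sketch with the google-cloud-pubsub client; the topic name and attributes are hypothetical:

```python
# Publish a deployment request to a Pub/Sub topic that the CI/CD
# system subscribes to. Attributes must be strings.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "deploy-requests")

future = publisher.publish(
    topic_path,
    data=b"deploy",                                        # payload (bytes)
    image="us-docker.pkg.dev/my-project/app/web:v1.2.3",   # artifact to deploy
    environment="staging",                                 # target environment
)
print(f"Published message {future.result()}")  # blocks until publish succeeds
```
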
Implement CI/CD pipelines (a deployment-audit sketch follows this list):
– CI with Cloud Build
– CD with Cloud Build
– Open source tooling (e.g., Jenkins, Spinnaker, GitLab, Concourse)
– Auditing and tracing of deployments (e.g., CSR, Artifact Registry, Cloud Build, Cloud Audit Logs)
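
For the auditing item above, Cloud Build writes admin activity entries to Cloud Audit Logs, which can be queried with the Cloud Logging client. A sketch; the project ID and filter are illustrative:

```python
# Trace who deployed what by reading Cloud Build audit log entries.
from google.cloud import logging

client = logging.Client(project="my-project")
log_filter = (
    'resource.type="build" '
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"'
)
for entry in client.list_entries(
    filter_=log_filter, order_by=logging.DESCENDING, max_results=10
):
    print(entry.timestamp, entry.payload)
```
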
Manage configuration and secrets (a Secret Manager sketch follows below):
– Secure storage methods
– Secret rotation and config changes
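
A minimal Secret Manager sketch covering both items: the secret lives outside the image, and reading the latest version makes rotation a restart rather than a redeploy. Project and secret names are placeholders:

```python
# Read the latest version of a secret at startup instead of baking
# it into the container image or source tree.
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
name = "projects/my-project/secrets/db-password/versions/latest"
response = client.access_secret_version(name=name)
db_password = response.payload.data.decode("utf-8")
# Referencing "latest" means a rotated secret is picked up on the
# next restart without a code change.
```
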
Manage infrastructure as code (a plan-gate sketch follows this list):
– Terraform
– Infrastructure code versioning
– Make infrastructure changes safer
– Immutable architecture
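
One common way to make infrastructure changes safer is a CI gate around terraform plan. A sketch in Python, assuming Terraform is on the PATH; the exit-code convention (0 = no changes, 2 = changes) comes from -detailed-exitcode:

```python
# Fail the pipeline stage when terraform plan detects changes, so a
# human reviews the diff before apply runs.
import subprocess
import sys

result = subprocess.run(
    ["terraform", "plan", "-detailed-exitcode", "-input=false", "-out=tfplan"],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print("No infrastructure changes.")
elif result.returncode == 2:
    print("Changes detected; manual approval required before apply.")
    sys.exit(1)  # stop here so an approver must advance the pipeline
else:
    print(result.stderr)
    sys.exit(result.returncode)
```
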
Deploy CI/CD tooling:
– Centralized tools vs. multiple tools (single vs. multi-tenant)
– Security of CI/CD tooling 
Manage different development environments, e.g., staging and production (a per-branch namespace sketch follows this list):
– Decide on the number of environments and their purpose
– Create environments dynamically per feature branch with GKE
– Local development environments with Docker, Cloud Code, Skaffold 
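
For dynamic per-branch environments, one approach is a Kubernetes namespace named after the feature branch. A sketch with the official kubernetes Python client, assuming kubeconfig access to the GKE cluster; names and labels are illustrative:

```python
# Create an isolated preview namespace for a feature branch.
from kubernetes import client, config

def create_branch_namespace(branch: str) -> None:
    config.load_kube_config()  # or load_incluster_config() inside the cluster
    name = f"preview-{branch.lower().replace('/', '-')}"
    namespace = client.V1Namespace(
        metadata=client.V1ObjectMeta(name=name, labels={"preview": "true"})
    )
    client.CoreV1Api().create_namespace(namespace)
    print(f"Created namespace {name}")

create_branch_namespace("feature/login-flow")
```
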
Secure the deployment pipeline (an IAM-binding sketch follows this list):
– Vulnerability analysis with Artifact Registry
– Binary Authorization
– IAM policies per environment 
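
For per-environment IAM, the same deployer identity can be granted a role in the staging project only. A sketch with the Resource Manager client; the project, role, and service account are placeholders:

```python
# Grant the deployer service account a role scoped to one project,
# so staging credentials cannot touch production.
from google.cloud import resourcemanager_v3
from google.iam.v1 import policy_pb2

client = resourcemanager_v3.ProjectsClient()
resource = "projects/my-staging-project"

policy = client.get_iam_policy(request={"resource": resource})
policy.bindings.append(
    policy_pb2.Binding(
        role="roles/run.developer",
        members=["serviceAccount:deployer@my-project.iam.gserviceaccount.com"],
    )
)
client.set_iam_policy(request={"resource": resource, "policy": policy})
```
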
Implementing service monitoring strategies
Manage application logs (a direct-to-API logging sketch follows this list):
– Collecting logs from Compute Engine, GKE with Cloud Logging, Fluentd
– Collecting third-party and structured logs with Cloud Logging, Fluentd
– Sending application logs directly to the Cloud Logging API
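
Sending logs directly to the Cloud Logging API looks like this with the google-cloud-logging client; the logger name and payload are illustrative:

```python
# Write a structured log entry straight to the Cloud Logging API,
# with no agent in between.
from google.cloud import logging

client = logging.Client(project="my-project")
logger = client.logger("checkout-service")

# Structured entries become queryable fields in Logs Explorer.
logger.log_struct(
    {"event": "order_placed", "order_id": "A-1042", "latency_ms": 87},
    severity="INFO",
)
```
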
Manage application metrics with Cloud Monitoring (a custom-metric sketch follows this list):
– Collecting metrics from Compute Engine
– Collecting GKE/Kubernetes metrics
– Use Metrics Explorer for ad hoc metric analysis
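
Writing a custom application metric with the monitoring_v3 client, following the pattern in Google's samples; the metric type and value are illustrative:

```python
# Write one data point of a custom metric to Cloud Monitoring.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/checkout/queue_depth"
series.resource.type = "global"

now = time.time()
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}}
)
point = monitoring_v3.Point({"interval": interval, "value": {"int64_value": 42}})
series.points = [point]

client.create_time_series(name=project_name, time_series=[series])
```
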
Manage Cloud Monitoring platform (an alerting-policy sketch follows this list):
– Creating a monitoring dashboard
– Filtering and sharing dashboards
– Configure third-party alerting in Cloud Monitoring (e.g., PagerDuty, Slack)
– Define alerting policies based on SLIs with Cloud Monitoring
– Automate alerting policy definition with Terraform
– Implementing SLO monitoring and alerting with Cloud Monitoring
– Understand Cloud Monitoring integrations (e.g., Grafana, BigQuery)
– Using SIEM tools to analyze audit/flow logs (e.g., Splunk, Datadog)
– Design Cloud Monitoring metrics scopes
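
Alerting policies defined in the console can equally be created via the API (or the equivalent Terraform resource). A sketch of a latency-SLI threshold policy with monitoring_v3; the filter and threshold are illustrative assumptions:

```python
# Create an alerting policy that fires when load balancer latency
# breaches a threshold for five minutes.
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

client = monitoring_v3.AlertPolicyServiceClient()

condition = monitoring_v3.AlertPolicy.Condition(
    display_name="latency above 500 ms",
    condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
        filter=(
            'metric.type="loadbalancing.googleapis.com/https/total_latencies" '
            'resource.type="https_lb_rule"'
        ),
        comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
        threshold_value=500,
        duration=duration_pb2.Duration(seconds=300),
    ),
)
policy = monitoring_v3.AlertPolicy(
    display_name="latency-sli-breach",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[condition],
)
created = client.create_alert_policy(name="projects/my-project", alert_policy=policy)
print(f"Created {created.name}")
```
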
Manage Cloud Logging platform (a sink-and-metric sketch follows this list):
– Enabling data access logs (e.g., Cloud Audit Logs)
– Enabling VPC flow logs
– Viewing logs in the Google Cloud Console
– Using basic vs. advanced logging filters
– Implementing logs-based metrics
– Understanding logging exclusions vs. logging exports
– Selecting the options for logging export
– Implementing a project-level / org-level export
– Viewing export logs in Cloud Storage and BigQuery
– Sending logs to an external logging platform
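
Exports and logs-based metrics from the list above can both be managed with the google-cloud-logging client. A sketch; the sink, dataset, metric names, and filters are illustrative:

```python
# Create a BigQuery export sink and a logs-based counter metric.
from google.cloud import logging

client = logging.Client(project="my-project")

# Export sink: route matching entries to a BigQuery dataset.
sink = client.sink(
    "errors-to-bq",
    filter_="severity>=ERROR",
    destination="bigquery.googleapis.com/projects/my-project/datasets/log_exports",
)
if not sink.exists():
    sink.create()

# Logs-based metric: count 5xx responses for dashboards and alerting.
metric = client.metric(
    "http_5xx_count",
    filter_='resource.type="http_load_balancer" httpRequest.status>=500',
    description="Count of 5xx responses at the load balancer",
)
if not metric.exists():
    metric.create()
```
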
Implement logging and monitoring access controls:
– Set ACL to restrict access to audit logs with IAM, Cloud Logging
– Set ACL to restrict export configuration with IAM, Cloud Logging
– Set ACL to allow metric writing for custom metrics with IAM, Cloud Monitoring
Optimizing service performance
Identify service performance issues:
– Evaluate and understand user impact
– Utilize Google Cloud’s operations suite to identify cloud resource utilization
– Utilize Cloud Trace and Cloud Profiler to profile performance characteristics
– Interpret service mesh telemetry
– Troubleshoot issues with the image/OS
– Troubleshoot network issues (e.g., VPC flow logs, firewall logs, latency, view network details)
Debug application code (a profiler-agent sketch follows this list):
– Application instrumentation
– Cloud Debugger
– Cloud Logging
– Cloud Trace
– Debugging distributed applications
– App Engine local development server
– Error Reporting
– Cloud Profiler
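
Cloud Profiler is enabled by starting its agent at process startup. A sketch following the documented Python pattern; the service name and version are placeholders:

```python
# Start the Cloud Profiler agent so CPU and wall time can be
# profiled in production (requires the google-cloud-profiler package).
import googlecloudprofiler

try:
    googlecloudprofiler.start(
        service="checkout-service",
        service_version="1.2.3",
        verbose=1,  # 0 = errors only, up to 3 = debug
    )
except (ValueError, NotImplementedError) as exc:
    # Profiling is best-effort: never let the agent break the app.
    print(f"Profiler not started: {exc}")
```
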
Optimize resource utilization (a cost-comparison sketch follows this list):
– Identify resource costs
– Identify resource utilization levels
– Develop plan to optimize areas of greatest cost or lowest utilization
– Manage preemptible VMs
– Utilize committed use discounts where appropriate
– TCO considerations (e.g., security, logging, networking)
– Consider network pricing
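
The committed use discount decision reduces to simple break-even arithmetic. A sketch with hypothetical prices (not real GCP rates):

```python
# Compare on-demand cost against a 1-year committed use discount.
# Prices are hypothetical placeholders; the structure is the point.
ON_DEMAND_HOURLY = 0.10   # assumed on-demand price per vCPU-hour
CUD_1YR_HOURLY = 0.063    # assumed 1-year committed price (~37% off)
HOURS_PER_MONTH = 730

def monthly_cost(avg_vcpus_used: float, committed_vcpus: int) -> float:
    # Committed cores are billed whether used or not; overflow is on-demand.
    committed = committed_vcpus * CUD_1YR_HOURLY * HOURS_PER_MONTH
    overflow = max(0.0, avg_vcpus_used - committed_vcpus) * ON_DEMAND_HOURLY * HOURS_PER_MONTH
    return committed + overflow

# A steady baseline of 40 vCPUs averaging 48 with bursts:
print(monthly_cost(avg_vcpus_used=48, committed_vcpus=0))   # all on-demand
print(monthly_cost(avg_vcpus_used=48, committed_vcpus=40))  # commit to the baseline
```
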
Managing service incidents
Coordinate roles and implement communication channels during a service incident:
– Define roles (incident commander, communication lead, operations lead)
– Handle requests for impact assessment
– Provide regular status updates, internal and external
– Record major changes in incident state (e.g., when mitigated, when all clear)
– Establish communications channels (e.g., email, IRC, Hangouts, Slack, phone)
– Scaling response team and delegation
– Avoid exhaustion / burnout
– Rotate / hand over roles
– Manage stakeholder relationships
Investigate incident symptoms impacting users:
– Identify probable causes of service failure
– Evaluate symptoms against probable causes; rank probability of cause based on observed behavior
– Perform investigation to isolate most likely actual cause
– Identify alternatives to mitigate issue
Mitigate incident impact on users:
– Roll back release
– Drain / redirect traffic
– Turn off experiment
– Add capacity
Resolve issues with deployments (e.g., Cloud Build, Jenkins):
– Code change / fix bug
– Verify fix
– Declare all-clear
Document issue in a postmortem:
– Document root causes
– Create and prioritize action items
– Communicate postmortem to stakeholders