- Engineer data models and routing for multi-tenant observability; ensure lineage, quality, and SLAs across the stream layer.
- Design, implement, and maintain data pipelines to ingest and process OpenShift telemetry (metrics, logs, traces) at scale.
- Build automated validation, replay, and backfill mechanisms for data reliability and recovery.
- Ensure security, compliance, and best practices for data pipelines and observability platforms.
- Document data flows, schemas, dashboards, and operational runbooks.
- Hands-on experience building streaming data pipelines with Kafka (producers/consumers, schema registry, Kafka Connect/KSQL/Kafka Streams).
- Strong data engineering skills in Python (or similar) for ETL/ELT, enrichment, and validation.
- Security and compliance for data pipelines: secret management, RBAC, encryption in transit/at rest.
- Stream OpenShift telemetry via Kafka (producers, topics, schemas) and build resilient ...