Senior DevOps Engineer
Location:
Wroclaw, Poland
Seniority:
Senior
Technologies:
DevOps, Site Reliability

We are looking for a passionate Senior DevOps Engineer who would like to work with a top-notch technology stack in a friendly and cozy atmosphere for one of the greatest retail companies in the USA.

Why is this company one of the greatest? Founded in 1901, it prospered through two world wars, the Spanish flu, the Great Depression, the 2020 pandemic, and still produces the highest quality products and services for their customers.

The Client’s company is based in Seattle, with a technology orientation rooted in Silicon Valley. You will work with some of the brightest minds, who are eager to build superb services that match business requirements and win market share.

And then there's Zoolatech! Just imagine a workplace and a team environment that you never want to leave once you have found it. Sound enticing? Apply to our position today and we can get you there.

Speaking of the core of the Client’s analytics: Data Technology is pivotal to the Client’s customer experience. As a Senior DevOps Engineer, you will own the design and build of the next generation of Data & Analytical Services and deliver on the strategic vision of customer-focused features and services that enhance the Client’s “Customer Service Excellence”.

The Client’s Analytical Platform is a real-time, event-streaming-centric analytical platform that provides high-quality, pre-stitched 360° views of customers, products, inventory, customer service, fulfillment, logistics, and credit. The Insights Delivery Team delivers data and insights that enable the Client’s data analysts, data scientists, leadership, store personnel, and other business users to drive critical customer experiences in a single place, in near real-time.

A DevOps Engineer is a key part of the Client’s Technology team that applies scientific, mathematical, and social principles to design, build, and maintain technology products, devices, systems, and solutions.

You are the perfect candidate if you have experience in data pipeline automation.

These are the technologies used in our project: AWS, GCP, CI/CD, Terraform, New Relic, Splunk, Kubernetes, Kafka, data platforms (BigQuery/Snowflake), and big data platforms (Hadoop/EMR/DataProc).

Responsibilities:

  • Infrastructure as Code: Design, implement, and maintain infrastructure using Terraform.

  • Cloud Platform Expertise: Build and manage scalable, secure, and cost-efficient solutions on AWS and GCP.

  • CI/CD Pipelines: Develop, optimize, and maintain robust CI/CD pipelines to streamline software delivery and deployment processes.

  • Monitoring and Observability: Implement and maintain monitoring, logging, and alerting solutions using tools like New Relic and Splunk to ensure high system availability and performance.

  • Containerization and Orchestration: Manage and deploy applications using Kubernetes, ensuring scalability and reliability of containerized workloads.

  • Event Streaming and Messaging: Work with Kafka to enable real-time data streaming and event-driven architectures.

  • Data Platforms: Collaborate with teams to support and optimize data platforms such as BigQuery and Snowflake, as well as big data platforms like Hadoop/EMR/DataProc.

  • Cloud Networking and Security: Design and maintain secure networking solutions and enforce cloud security best practices, ensuring data integrity and compliance.

  • Platform Upgrades & Migrations: Lead and execute application upgrades, platform migrations, and infrastructure updates with minimal downtime and impact to business operations.

  • Collaboration: Work closely with development, data engineering, and operations teams to deliver scalable and reliable solutions that meet evolving business needs.

  • On-Call Support: Participate in the on-call rotation to address incidents, troubleshoot issues, and maintain system reliability.

Requirements:

  • 5+ years of experience in DevOps, Site Reliability Engineering (SRE), or a related role.

  • Hands-on experience with the AWS and GCP cloud platforms.

  • Expertise in Terraform for infrastructure automation and management.

  • Strong knowledge of CI/CD pipelines and associated tools (e.g., GitHub Actions, GitLab CI/CD).

  • Proficiency in monitoring and logging tools such as New Relic and Splunk.

  • Experience managing containerized applications and orchestration platforms, particularly Kubernetes.

  • Familiarity with Kafka for event-driven architectures and real-time messaging.

  • Experience working with data platforms such as BigQuery, Snowflake, or big data solutions like Hadoop/EMR/DataProc.

  • Solid understanding of cloud networking and security principles, including VPCs, firewalls, IAM, and encryption.

  • Proven ability to lead and execute platform upgrades and migrations with minimal disruption.

  • Excellent troubleshooting and problem-solving skills with a focus on root-cause analysis.

  • Strong communication and collaboration skills to work effectively across teams.
