We are looking for a passionate Data Engineer who would like to work with a top-notch technology stack in a friendly and cozy atmosphere for one of the greatest retail companies in the USA.
Why is this company one of the greatest? Founded in 1901, it prospered through two world wars, the Spanish flu, the Great Depression, and the 2020 pandemic, and it still produces the highest-quality products and services for its customers.
The Client is based in Seattle, with a technology orientation rooted in Silicon Valley. You will work with some of the brightest minds, who are eager to build superb services that meet business requirements and win market share.
And then there's Zoolatech! Just imagine a workplace and a team environment that you never want to leave once you have found it. Sound enticing? Apply for this position today and we can get you there.
The Client's Data Technology is at the core of the Company's analytics and is pivotal to the customer experience. As a Data Engineer, you will own the design and build of the next generation of the Client's Data & Analytical Services and deliver on the strategic vision of customer-focused features and services that enhance the Client's "Customer Service Excellence".
A Data Engineer is a key part of the Client’s Technology team that applies scientific, mathematical, and social principles to design, build, and maintain technology products, devices, systems, and solutions.
You are the perfect candidate if you have experience in data transformation and modeling. These are the technologies used on our project: Python, Spark, Kafka, Airflow, GCP BigQuery, Dataproc, Dataplex, AWS, Teradata, Tableau.
Responsibilities:
Support the development and evolution of our data warehouse and BI platform.
Partner with the BI Manager, Data and BI Engineers, Program Managers, and Analysts on building a best-in-class suite of tools and reporting mechanisms to bring the most salient, insightful data more directly into key business functions.
Design and implement modern ETL and data processing solutions on cloud-based platforms (S3, Redshift, etc.) and deprecate legacy on-premises solutions (Oracle, etc.).
Develop data integration solutions leveraging multiple disparate sources.
Continually tune performance and plan capacity for future growth.
Apply broad working knowledge of related disciplines to create integrated technical innovations and solutions for complex business situations.
Must have:
3+ years of relevant Data Engineering experience
Classical data warehousing, data structures, data marts, data profiling
Experience writing advanced SQL
Experience with one of Python, Java or Scala
ETL
Experience with GCP BigQuery
Basic experience with Big Data technologies
Nice to have:
Experience with Spark
Other cloud computing experience (e.g., AWS)
Knowledge of Kafka for event streaming and real-time data pipelines
Familiarity with Kubernetes for container orchestration