Data Engineer
Dallas, TX, United States
Data Engineer - Must be a US citizen or Green Card Holder
Contract:
6-month initial contract, converting to full-time after 6 months
Rate:
$70-$90 per hour
Location:
Dallas, TX (2 days on-site per week)
We are seeking a skilled Data Engineer with extensive experience in DBT, Snowflake, and Azure, along with proficiency in Python and SQL. The ideal candidate will have a strong understanding of data modeling principles and a proven track record of developing robust data pipelines and architectures.
Responsibilities:
Data Pipeline Development:
Design, build, and maintain efficient and scalable data pipelines using DBT, Snowflake, and Azure services. Develop ETL processes to extract, transform, and load data from various sources into the data warehouse.
DBT Implementation:
Utilize DBT (Data Build Tool) for modeling, testing, and deploying data transformations. Design and implement DBT models to streamline data transformation workflows and ensure data accuracy and consistency.
Snowflake Management:
Manage Snowflake data warehouse environments, including provisioning, configuration, optimization, and monitoring. Implement best practices for data warehouse performance, security, and scalability.
Azure Integration:
Work with Azure services such as Azure Data Factory, Azure Databricks, and Azure SQL Database for data integration, processing, and storage. Collaborate with Azure cloud architects and administrators to ensure seamless integration with existing Azure infrastructure.
Data Modeling:
Collaborate with data architects and analysts to design and implement data models that meet business requirements. Develop a deep understanding of the underlying data and business processes to create effective data models that support analytics and reporting needs.
Performance Optimization:
Identify and address performance bottlenecks in data pipelines and SQL queries. Optimize data processing and query performance to ensure timely and efficient data delivery to end-users.
Documentation and Collaboration:
Document data pipelines, schemas, and processes to ensure knowledge sharing and maintainability. Collaborate with cross-functional teams including data scientists, analysts, and software engineers to support data-driven initiatives and projects.
Continuous Improvement:
Stay updated on emerging technologies, tools, and best practices in data engineering. Proactively identify opportunities for process improvement and optimization to enhance data reliability, quality, and accessibility.
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree preferred.
Proven experience as a Data Engineer with a focus on DBT, Snowflake, and Azure.
Strong proficiency in Python and SQL for data manipulation and scripting.
Solid understanding of data modeling concepts and techniques.
Experience building and optimizing data pipelines for large-scale data processing.
Familiarity with cloud computing platforms and services, particularly Azure.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively.