Senior Data Engineer
Austin, TX, United States
Job Title: Senior Data Engineer
Job ID: 2022-11506
Job Location: Bloomfield, CT; Denver, CO; Austin, TX; New York, NY; virtual work is possible for the right candidates
Job Travel Location(s):
# Positions: 3
Employment Type: W2
Candidate Constraints:
Duration: Long Term
# of Layers: 0
Work Eligibility: All work authorizations are permitted; no visa transfers
Key Technology: cloud, OO languages, database systems, DevOps, machine learning
Job Responsibilities:
The Senior Data Engineer is responsible for delivering a business need end-to-end, from understanding the requirements to deploying the software into production. This role requires fluency in some of the critical technologies and proficiency in others, along with a hunger to learn on the job and add value to the business. Among the critical attributes of a Senior Data Engineer are ownership and accountability.
In addition to delivery, the Senior Data Engineer should have an automation-first, continuous-improvement mindset. They should drive the adoption of CI/CD tools and support the improvement of the tool sets and processes.
Behaviors of a Senior Data Engineer:
The Senior Data Engineer can articulate clear business objectives aligned to technical specifications and works in an iterative, agile pattern daily. They take ownership of their work tasks, embrace interacting with all levels of the team, and raise challenges when necessary. We aim to be cutting-edge engineers, not institutionalized developers.
Key Characteristics and duties you will perform:
Analyze source data and data flows, working with structured and unstructured data.
Build data pipelines to extract, transform, process, and store data in various target systems
Write reusable, modular code
Design and architect the solution independently
Have a passion to learn; take ownership and accountability
Demonstrate a deep desire for automation using DevOps practices and toolsets
Have a desire to simplify; be entrepreneurial and business-minded
Skills and Experience Required:
Mandatory:
2+ years being part of Agile teams – Scrum or Kanban
2+ years of solution architecting in cloud technology (e.g., AWS, Azure, Google Cloud)
6+ years of working in an object-oriented language: Java or Scala, plus Python
3+ years of working experience with Apache Spark, PySpark, and Apache Airflow
3+ years of experience with database systems such as Hadoop, Teradata, Redshift, Oracle, MySQL, PostgreSQL, MongoDB, Neo4j, or Cassandra
3+ years with a DevOps toolchain: Jenkins, Maven, Artifactory, Docker, Terraform, Ansible
3+ years of experience with AWS Glue, SNS, SQS, S3, ECS, and Lambda
2+ years of working experience operationalizing machine learning models
Desired:
Understanding of machine learning frameworks (e.g., scikit-learn, SciPy)
Understanding of, and 1+ years of working experience with, deep learning frameworks
Education:
Minimum degree required for position: bachelor's degree
Acceptable Major(s) or Field(s) of Study: Data Science, Computer Science, or any related field
Skills/Certifications/Licenses:
Nice to have: AWS Solutions Architect or Developer certification
Nice to have: data science or machine learning certification from a reputable institution