
Data Engineer

Abbott

United States
Full-time, Remote
Posted Feb 25, 2026

About the role

The candidate will be responsible for big data engineering, data wrangling, and data analysis in the cloud. The role will also contribute to defining and implementing the organization's big data strategy and to driving the implementation of IT solutions for the business.

Responsibilities

  • Design and implement data pipelines to be processed and visualized across a variety of projects and initiatives
  • Develop and maintain optimal data pipeline architecture by designing and implementing data ingestion solutions on AWS using AWS native services
  • Design and optimize data models on AWS Cloud using Databricks and AWS data stores such as Redshift, RDS, S3
  • Integrate and assemble large, complex data sets that meet a broad range of business requirements
  • Read, extract, transform, stage, and load data to selected tools and frameworks as required
  • Customize and manage integration tools, databases, warehouses, and analytical systems
  • Process unstructured data into a form suitable for analysis and assist in analysis of the processed data
  • Work directly with the technology and engineering teams to integrate data processing and business objectives
  • Monitor and optimize data performance, uptime, and scale; maintain high standards of code quality and thoughtful design
  • Create software architecture and design documentation for the supported solutions and overall best practices and patterns
  • Support team with technical planning, design, and code reviews including peer code reviews
  • Provide Architecture and Technical Knowledge training and support for the solution groups
  • Develop good working relationships with other solution teams and groups, such as Engineering, Marketing, Product, Test, and QA
  • Stay current with emerging trends, making recommendations as needed to help the organization innovate
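To illustrate the extract-transform-load work described above, here is a minimal plain-Python sketch of the pattern. The record fields, source data, and unit-conversion rule are invented for illustration only; in this role the equivalent pipeline would run on PySpark/Databricks against AWS stores such as S3 and Redshift rather than in-memory lists.

```python
import csv
import io
import json

# Hypothetical raw JSON lines, standing in for records ingested from a
# source such as S3. Field names here are invented for illustration.
raw_records = [
    '{"device_id": "A1", "reading": "98.6", "unit": "F"}',
    '{"device_id": "A2", "reading": null, "unit": "F"}',
    '{"device_id": "A3", "reading": "37.1", "unit": "C"}',
]

def extract(lines):
    """Parse each JSON line into a dict (the 'extract' step)."""
    return [json.loads(line) for line in lines]

def transform(records):
    """Drop incomplete rows and normalize readings to Celsius."""
    cleaned = []
    for rec in records:
        if rec["reading"] is None:
            continue  # data cleaning: skip rows with missing values
        value = float(rec["reading"])
        if rec["unit"] == "F":
            value = round((value - 32) * 5 / 9, 1)
        cleaned.append({"device_id": rec["device_id"], "celsius": value})
    return cleaned

def load(records, sink):
    """Write the cleaned rows as CSV to a file-like sink (the 'load' step)."""
    writer = csv.DictWriter(sink, fieldnames=["device_id", "celsius"])
    writer.writeheader()
    writer.writerows(records)

sink = io.StringIO()
load(transform(extract(raw_records)), sink)
print(sink.getvalue())
```

The same extract/transform/load structure carries over directly to a distributed engine: each function becomes a read, a set of DataFrame transformations, and a write.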

Requirements

  • Bachelor's degree in Computer Science, Information Technology, or another relevant field
  • 1-3 years of recent experience in Software Engineering, Data Engineering, or Big Data
  • Ability to work effectively within a team in a fast-paced changing environment
  • Knowledge of or direct experience with Databricks and/or Spark
  • Software development experience, ideally in Python, PySpark, Kafka or Go, and a willingness to learn new software development languages to meet goals and objectives
  • Knowledge of strategies for processing large amounts of structured and unstructured data, including integrating data from multiple sources
  • Knowledge of data cleaning, wrangling, visualization and reporting
  • Ability to explore new alternatives or options to solve data mining issues, and utilize a combination of industry best practices, data innovations and experience
  • Familiarity with databases, BI applications, data quality, and performance tuning
  • Excellent written, verbal and listening communication skills
  • Comfortable working asynchronously with a distributed team

Benefits

  • Free medical coverage in our Health Investment Plan (HIP) PPO medical plan
  • An excellent retirement savings plan with high employer contribution
  • Tuition reimbursement and the Freedom 2 Save student debt program
  • FreeU education benefit - an affordable and convenient path to getting a bachelor’s degree

About the Company

Abbott is a global healthcare leader that helps people live more fully at all stages of life. Our portfolio of life-changing technologies spans the spectrum of healthcare, with leading businesses and products in diagnostics, medical devices, nutritionals and branded generic medicines.

Job Details

Salary Range

Salary not disclosed

Location

United States

Employment Type

Full-time, Remote
