Senior Specialist, Data Engineer
EyeBio
Job Description
The Opportunity
- Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare.
- Be part of an organization driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products.
- Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and that creates custom software, allowing us to tackle some of the world's greatest health threats.
Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to our other sites, are essential to supporting our business and strategy.
A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location: from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team, collaborating globally to optimize connections and share best practices across the Tech Centers.
Role Overview
We are seeking a hands‑on Data Engineer with strong, proven experience in SQL and Python for data processing. This role is not about learning data engineering fundamentals on the job; it is for someone who already applies SQL and Python daily to build, maintain, and evolve reliable data transformation pipelines in production environments.
You will play a critical role in ensuring our data is accurate, well‑structured, and scalable, enabling downstream analytics and business processes across a growing and increasingly complex data landscape.
What will you do in this role
Data Transformation & Engineering
- Design, write, and maintain high‑quality SQL and Python scripts for data transformations across development and production environments.
- Implement DDL and advanced data operations, including data enrichment, cleansing, parsing, normalization, and schema evolution.
- Own and continuously improve existing transformations as data volume, variety, and complexity increase.
- Ensure transformations are readable, testable, and maintainable, enabling other engineers to understand, extend, and support them.
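As a rough illustration of the cleansing, parsing, and normalization work described above (the record schema, field names, and rules here are hypothetical, not taken from this role), a transformation step in Python might look like:

```python
from datetime import datetime

def clean_records(raw_rows):
    """Cleanse and normalize raw rows from a hypothetical source feed.

    Each input row is a dict such as
    {"id": " 001 ", "visit_date": "2026-03-01", "site": "hyd"}.
    Rows missing an id or with an unparseable date are dropped.
    """
    cleaned = []
    for row in raw_rows:
        record_id = (row.get("id") or "").strip()
        if not record_id:
            continue  # drop rows without an identifier
        try:
            # parse and validate the date before re-emitting it in ISO form
            visit_date = datetime.strptime(row.get("visit_date", ""), "%Y-%m-%d").date()
        except ValueError:
            continue  # drop rows whose date cannot be parsed
        cleaned.append({
            "id": record_id,
            "visit_date": visit_date.isoformat(),
            "site": (row.get("site") or "").strip().upper(),  # normalize site codes
        })
    return cleaned
```

In practice a step like this would run inside the pipeline platform (e.g., as a Glue job) rather than standalone, with the cleansing rules driven by the actual source schemas.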
Platform & Pipeline Execution
- Work hands‑on with AWS services such as AWS Glue, Lambda, and Amazon Redshift, or equivalent platforms such as Azure Data Factory.
- Contribute to reliable, automated data pipelines deployed through the company’s Terraform‑based CI/CD setup.
- Collaborate with platform, DevOps, and analytics teams to ensure smooth deployment, monitoring, and operation of data workloads.
- Prior experience with Databricks is not required but considered an advantage.
Engineering Quality & Collaboration
- Apply sound data engineering principles, including version control (Git), code reviews, and structured change management.
- Communicate clearly about data logic, assumptions, and trade‑offs with both technical and non‑technical stakeholders.
- Contribute to shared standards and best practices for SQL, Python, and ETL development.
- Operate with a strong focus on data quality, reliability, and compliance, appropriate for a regulated enterprise environment.
What should you have
- Primary skills: Data Engineering, AWS, IaC, Data Lakes, GitHub, Pipeline Management
- Secondary skills: DevOps collaboration, general scripting.
- Bachelor's degree in Information Technology, Computer Science, or another technology stream.
- 7+ years of experience developing data pipelines and data infrastructure, ideally within a drug development or life sciences context.
- Strong, practical experience with SQL and Python, specifically for data transformation and ETL use cases.
- Demonstrated experience building and maintaining production data pipelines, not just prototypes or academic exercises.
- Solid understanding of relational data models, data normalization, and performance‑aware SQL design.
- Experience working with cloud‑based data platforms (AWS or Azure).
- Hands‑on experience with Git‑based workflows and automated deployment pipelines.
- Ability to reason clearly about data problems and express solutions in clean, maintainable code.
- Experience in regulated or enterprise environments (e.g., healthcare, life sciences, finance).
- Exposure to Databricks, Spark‑based processing, or large‑scale analytical workloads.
- Familiarity with infrastructure‑as‑code concepts (e.g., Terraform) from a user’s perspective.
- Experience supporting data consumers such as analytics, reporting, or data science teams.
- This is a purely hands‑on data engineering role, focused on SQL and Python excellence rather than broad or experimental tooling.
- The environment is intentionally straightforward and disciplined: SQL and Python, versioned in Git, deployed via CI/CD.
- The value of the role comes from clarity of thinking, strong communication, and engineering craftsmanship, not from adopting exotic technologies.
- You will have real ownership over critical data transformations that others depend on and build upon.
Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.
Who we are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.
What we look for
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today.
#HYDIT2026
Required Skills:
Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs
Preferred Skills:
Current Employees apply HERE
Current Contingent Workers apply HERE
Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status:
Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements:
Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Job Posting End Date:
03/30/2026
*A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.