
Senior Data Engineer - Remote Job


Date: Apr 4, 2021

Location: Valencia, CA, US, 91355

Company: Boston Scientific

Additional Locations: (n/a)

Purpose and Passion • Comprehensive Benefits • Life-Work Integration • Community • Career Growth

At Boston Scientific, you will find a collaborative culture driven by a passion for innovation that keeps us connected on the most essential level. With determination, imagination and a deep caring for human life, we’re solving some of the most important healthcare industry challenges. Together, we’re one global team committed to making a difference in people’s lives around the world. This is a place where you can find a career with meaningful purpose—improving lives through your life’s work.

 

About the role:

As our Senior Data Engineer, you will be responsible for building, expanding, and optimizing our data and data-pipeline architecture, as well as optimizing data flow and data collection for cross-functional teams. You will support software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that optimal data-delivery architecture is consistent across ongoing projects. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

 

Your responsibilities include:

  • Build AWS infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources at a variety of scales (small to large)
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management
  • Stay current with, and be able to apply, rapidly developing data infrastructure and analytics tools, including automated “infrastructure as code” deployment tools such as Terraform
  • Set up, design and support CI/CD pipelines
  • Lead and/or actively participate in both system and process improvements
  • Identify, design, and implement internal data flow process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Seek to understand business requirements for data and information, and establish processes and reporting to support them
  • Compile data for analysis and reporting to key stakeholders in support of business processes
  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure and functionality needs
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
  • Use in-depth knowledge of business unit functions and cross-group dependencies/relationships to shape solutions to business problems
  • Build interactive dashboards on the health and security of data pipelines and architecture with tools such as AWS CloudWatch and Amazon Elasticsearch Service
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions, following Cloud Security best practices
  • Perform root cause analysis on internal and external data pipeline and process problems to answer specific business questions and identify opportunities for improvement
  • Apply comprehensive knowledge of data engineering to resolve complex issues in creative ways
  • Support ad hoc requests and special projects as required
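The extract-transform-load pattern at the heart of the responsibilities above can be sketched in a minimal, self-contained Python example. This is an illustrative sketch only: the field names (`patient_id`, `reading`) and the in-memory "load" step are hypothetical stand-ins for a real source system and sink such as S3.

```python
import csv
import io
import json

def extract(csv_text):
    """Extract: parse raw CSV rows from a source system into dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: normalize types and drop incomplete records."""
    out = []
    for row in rows:
        if not row.get("patient_id"):
            continue  # skip records missing the key field
        out.append({
            "patient_id": row["patient_id"],
            "reading": float(row["reading"]),  # string -> numeric
        })
    return out

def load(records):
    """Load: serialize to JSON Lines (stand-in for writing to a real sink)."""
    return "\n".join(json.dumps(r) for r in records)

raw = "patient_id,reading\nA1,98.6\n,72.0\nB2,101.2\n"
print(load(transform(extract(raw))))
```

In production this logic would typically run inside a managed service such as AWS Glue or a Lambda function, with the load step writing to S3 or Redshift rather than returning a string.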

 

What we’re looking for:

Qualifications:

  • Bachelor’s degree in computer science or a related field
  • 5+ years of related experience
  • A seasoned, experienced professional with wide-ranging experience and expertise in data engineering
  • A successful history of manipulating, processing and extracting value from large disconnected datasets
  • Experience building and optimizing data pipelines, architectures and data sets at a variety of data scales and velocities
  • Working knowledge of message queuing, stream processing, and highly scalable data stores
  • Advanced working SQL knowledge and query authoring ability, experience working with relational databases, as well as working familiarity with a variety of databases
  • Strong project management and organizational skills
  • Experience supporting and working with cross-functional teams in a dynamic environment
  • Experience creating and maintaining dashboards
  • Demonstrated experience working cross functionally with IT/IS teams, external software development vendors and research partners
  • Experience with deployment and support of data products in full production
  • Experience working within an Agile project management framework (Scrum), and related tools (JIRA)
  • Demonstrated ability to work with remote and distributed teams
  • Is recognized as an expert in the work group
  • Comfortable with analysis of situations or data that require an in-depth evaluation of various factors
  • Exercises judgment within broadly defined practices and policies in selecting specific methods, techniques and evaluation criteria for obtaining results
  • Has specialized knowledge of various alternatives and their impact on the business
  • Medical device experience preferred

Experience using the following software/frameworks/tools:

  • Big data batch and stream processing tools: Hadoop, Spark, Kafka, Storm, Flink, Kinesis, AWS EMR etc.
  • Relational SQL databases: PostgreSQL, MySQL, MS SQL, AWS RDS, Amazon Redshift, Redshift Spectrum, DMS, Athena, etc.
  • NoSQL databases: Redis, MongoDB, DynamoDB, ElastiCache, etc.
  • Analytical/visualization tools: AWS QuickSight, Qlik, Tableau, and PowerBI
  • Data pipeline and workflow management tools: AWS Glue, AWS Lake Formation, AWS Step Functions
  • AWS cloud services: S3, Lambda, CloudWatch, CloudTrail, Elasticsearch, EC2, IAM, SQS, SNS, Cognito, API Gateway, and VPC/networking tools
  • Object-oriented, functional, and scripting languages: Python (preferred), Java, C++, Scala, Bash, etc.
  • Integrating the AWS SDK and AWS CDK into custom applications and scripts
  • Developing and maintaining applications running on Docker, ECS, and EKS
  • Automated infrastructure deployment and management tools: Terraform, AWS CloudFormation, etc.
  • Software development tools and platforms: GitLab, AWS DevOps resources
  • Excellent working knowledge of Git

 

Preferred certifications:

  • AWS Solutions Architect, AWS Data Analytics, Databricks Certified Developer for Apache Spark, Cloudera Spark and Hadoop Developer

 

About us

As a global medical technology leader for more than 35 years, our mission at Boston Scientific (NYSE: BSX) is to transform lives through innovative medical solutions that improve the health of patients. If you’re looking to truly make a difference to people both around the world and around the corner, there’s no better place to make it happen.

 

Requisition ID: 474712

 


Nearest Major Market: Los Angeles

Job Segment: Database, Medical, Engineer, Medical Technology, Computer Science, Technology, Healthcare, Engineering