What Cloud/Data Engineering contributes to Cardinal Health
Works as part of a team in the Cloud Analytics Services space, providing services to our analytics community.
- Cloud/Data Engineering Certifications (preferred)
- Experience with Machine Learning (preferred)
- Broad experience in developing cloud migration solutions in either AWS or GCP (GCP preferred)
- Experience developing ARM templates, CloudFormation templates, or Terraform (preferred)
- Experience working in a DevOps environment (preferred)
- Support engineering requirements of legacy/on-prem enterprise applications and develop cloud migration solutions.
- Troubleshoot and problem-solve to remediate issues that block migration or are uncovered during migration.
- Support the full project lifecycle - discovery, analysis, architecture, design, documentation, building, migration, automation and production-readiness.
- Collaborative: shares information, best practices and experiences with others, and is willing to embrace new and innovative ideas.
- Proven hands-on software development experience with open-source technologies: Java, MySQL, Maven, Git, Jenkins, JUnit, Tomcat.
- Ability to architect a highly available, distributed, and secure system on a cloud platform
- Analyze, design and develop tests and test-automation suites.
- Design and develop a processing platform using various configuration management technologies.
- Provide ongoing maintenance, support and enhancements in existing systems and platforms.
- Collaborate cross-functionally with data scientists, business users, project managers and other engineers to achieve elegant solutions.
- Proficient with SQL
- Develops large-scale data structures and pipelines to organize, collect, and standardize data, helping to generate insights and address reporting needs.
- Collaborates with data science team to transform data and integrate algorithms and models into automated processes.
- Builds data marts and data models to support Data Science and other internal customers.
- Integrates data from a variety of sources, assuring that they adhere to data quality and accessibility standards.
- Analyzes current information technology environments to identify and assess critical capabilities and recommend solutions.
- Experiments with available tools and advises on new tools in order to determine the optimal solution given the requirements of the model/use case.
- Defines and approves data engineering design patterns for general reuse across multiple implementations.
- Help design data models and perform the associated data engineering activities to meet business needs.
- Proactively manage technical debt incurred during software implementations by identifying opportunities for enhancement (debt repayment), even under tight deadlines.
- Experience developing standards in partnership with Engineering, Infrastructure Service, and Application Development to select appropriate technical solutions.
- Experience within healthcare industry and healthcare data
- Experience with multi-threaded, Big Data, and distributed cloud architectures and frameworks, including using Hadoop, MapReduce, Cloudera, Hive, Spark, and Elasticsearch to conduct Big Data analytics
- Experience with Extract, Transform, and Load (ETL) processes, including document parsing techniques and managing large data sets, such as deployed environments at multi-TB scale, while adhering to service-level agreements
- Experience writing well-crafted, high-quality, self-documented, pragmatic code
- 3+ years of work experience with ETL, data modeling, and business intelligence and big data architectures.
- Experience developing and managing data warehouses on a terabyte or petabyte scale.
- Strong experience with massively parallel processing (MPP) and columnar databases.
- Experience with Python and shell scripting.
- Deep understanding of advanced data warehousing concepts and track record of applying these concepts on the job.
- 3+ years of experience with API development in various open-source technologies (Spark, Hadoop, R, Apache Beam, Kafka)
- 3+ years of experience with open-source development tool chains (SVN, Git, Jenkins, etc.)
- Experience with production database management and optimization at scale
- Experience with user access, authentication, user-permission management, and security (LDAP, AD, Kerberos)
- Experience with tools such as Jenkins and Artifactory to build automation, CI/CD, and self-service pipelines.
- Experience with RESTful services, service-oriented architecture, distributed systems, cloud systems (AWS), and microservices.
- Experience working with enterprise or cloud messaging systems such as Kafka, Pub/Sub, and AWS Kinesis
- Experience with core GCP technologies: Compute Engine, autoscaling, load balancing, container services, object storage, VPC, data pipeline development, and BigQuery.
- Experience with Immutable Infrastructure and Infrastructure-as-Code patterns and technologies: Docker, Kubernetes, Terraform, Ansible, Packer, Vagrant.
- Participate in sprint planning meetings to contribute with estimations and development strategy
- Drive cost savings with ownership of software designs and architecture
- Learn about business use cases, problems and identify technology solutions
- Collaborate with the appropriate departments to assess and recommend technologies that support company organizational needs.
- Strong verbal and written communication skills
- A passion for technology - we are looking for someone who is keen to leverage their existing skills and seek out new skills and solutions.
- Experience with Scrum/Agile development methodologies.
- Capable of delivering on multiple competing priorities with little supervision.
- Intellectual curiosity and a tenacious appetite for troubleshooting are required
- Proven consultative and client relationship management skills
Cardinal Health is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.