Permanent General Companies, Inc.
Data Engineer II
At a glance
Location: US-TN-Nashville
Posted: 02/07/2020
Closing: 03/06/2020
Degree: 4 Year Degree
Type: Full-Time
Experience: Not Specified
Job description

The General is seeking a Data Engineer II to join our company.

Who We Are

We are one of the fastest growing and most exciting companies in the insurance industry to work for today. We’re proud to say we’ve been in the insurance business for over 50 years.

Because of our rapid rate of expansion, we’re looking for the best talent to help The General® bring quality insurance to people across the country. Ask any of our employees and they’ll tell you that life at The General® is a challenging, fast-paced experience that offers a competitive total rewards package, a healthy work/life balance and an exciting working environment.

You’ll gain crucial insights from your potential coworkers who bring different backgrounds and professional experiences to The General®. You’ll also find there are plenty of opportunities for advancement, if you’ve got the ambition. The work culture at The General® is one that celebrates diversity, team spirit, and good old-fashioned hard work.


What You’ll Do

As the Data Engineer II, you will collect, store, and process data and build business intelligence and analytics applications within our data platform. You will leverage open source technologies like Spark, Python, Hadoop and cloud native tools to curate high-quality data sets. You will be responsible for integrating these applications with the architecture used across the organization. You will also establish best practices for data integration, data visualization, schema design, and the performance and reliability of data processing systems, supporting data quality and enabling convenient access to data for our data scientists and business users.

  • Perform exploratory data analysis to determine which questions can be answered effectively with a given data set. Analyze new (possibly unstructured) data sources to determine what additional value they may bring.
  • Design and develop highly scalable and extensible data pipelines from internal and external sources. 
  • Work on cross-functional teams to design, develop, and deploy data-driven applications and products, particularly within the space of data science. 
  • Prototype emerging technologies involving data ingestion and transformation, distributed file systems, databases and frameworks.  
  • Design, build, and maintain tools to increase the productivity of application development and client facing teams.
  • Partner with business analysts to define, develop, and automate data quality checks. 
  • Design and develop big data applications and data visualization tools.

The Team You’ll Join
We are a team of collaborative, eager, tech-loving professionals. We are focused on finding new ways to quickly gather and present data to our business. We excel at keeping our skill set refined and up to date with current industry standards. We Make Life Easier® with our customer-oriented approach to helping the business access and interpret data.

Job requirements

Who You Are
You are a deeply curious technologist who has experience applying new technologies and working with large sets of data. You have a highly analytical mindset with previous exposure to data science. You take ownership of your assignments and dive deep into technical challenges with minimal oversight. You are able to quickly and efficiently shift priorities. You are flexible and adapt to change easily. You value collaboration, learning and experimentation. You are a methodical problem solver who values arriving at a result. You often juggle multiple projects simultaneously; multi-tasking comes naturally to you.

You bring a bachelor’s degree in computer science or a related field. You bring in-depth knowledge of SQL. You’re experienced using a variety of data stores (RDBMS, analytic databases, scalable document stores). You have hands-on programming experience in Python or Java with an emphasis on building ETL workflows and data-driven solutions. You have experience with big data batch computing tools (Hadoop or Spark) and developing distributed data processing solutions. You have experience with cloud computing platforms (AWS, GCP, Azure).

You have a solid understanding of, and business acumen in, data-rich industries like insurance or financial services. You understand data modeling principles, database internals, infrastructure as code, and software engineering tools and workflows.
