Big Data Engineer
Company: Atrilogy Solutions Group, Inc.
Posted on: December 7, 2018
Atrilogy's direct client is looking for a Big Data Engineer in Durham, NC on a 12-month contract. The successful candidate will become part of the Center of Excellence for Process
Automation at Fidelity. We provide end-to-end architectural strategies and spearhead innovative solutions for our customers. This covers a wide range of activities, including but not limited
to: influencing product strategy, developing scalable solutions, and partnering with squads to launch valuable, clear solutions for our customers.
Title: Big Data Engineer
Duration: 12+ Months
Location: Durham, NC
As a Data Engineer, you will use your expertise to maintain an end-to-end vision: how a logical design translates into one or more physical databases, and how the data flows
through the successive stages involved. You have a passion for delivering solutions in a client-obsessed environment that will give you opportunities to grow multi-dimensionally.
* Understanding of the principles, best practices, and trade-offs of schema design for both relational and NoSQL database systems
* Good understanding of Big Data NoSQL databases/technologies (DynamoDB, Hive, Spark, MongoDB)
* Experience with DevOps, Continuous Integration, and Continuous Delivery (Maven, Jenkins, Stash, Ansible, Docker)
* Scripting experience with Python, Ruby, and Unix shell
Nice to have:
* Deployment Automation in Private/Public cloud preferably on AWS
* Experience building data ingestion on the cloud (using tools like AWS Glue, Apache Sqoop, or other vendor products like Talend or StreamSets)
The Expertise We're Looking For
5+ years' experience in large-scale Big Data development and deployment automation in private/public cloud, preferably on AWS
Define structure, integrate, govern, store, describe, model, and maintain data in the enterprise for accuracy and usage, keeping an accurate picture of the current state
Support policies and procedures enforced by the data governance committee to ensure data architecture best practices, including accountability, governance, and requirements
Document data inventory and data flow diagrams to determine what can be measured, when and how
The Purpose of Your Role
Determine database structural requirements by analyzing client operations, applications, and programming; reviewing objectives with clients; and evaluating current systems.
Develop database solutions by designing the proposed system and defining the database's physical structure and functional capabilities, along with security, back-up, and recovery specifications.
Maintain database performance by identifying and resolving production and application development problems; calculating optimum values for parameters; evaluating, integrating, and installing
new releases; completing maintenance; and answering user questions.
The Skills You Bring
Your Big Data Skills with popular stacks like Hadoop and Spark
Your knowledge of AWS CloudFormation, OpenStack Heat templates, and Terraform
Your expertise in all phases of data modeling, from conceptualization to database optimization.
Your ability to map the systems and interfaces used to manage data, set standards for data management, analyze the current state and envision the desired future state, and define the projects
needed to close the gap between the current state and future goals
Your desire and aptitude for learning new technologies
Your excellent verbal and written communication skills
The Value You Deliver
Build a strategy to reinvent systems and tools to create a continuous cycle of innovation
Create data monitoring models for each product and work with our marketing team to create models ahead of new releases
Build data models supporting complex transformations
Identify and ingest new data sources and perform feature engineering for integration into models
****For immediate consideration, please submit your resume in Word format, along with daytime contact information. LOCAL CANDIDATES ONLY PLEASE unless you are willing to relocate
yourself at your own expense. Client is unable to provide H-1B Visa sponsorship at this time. All submittals will be treated confidentially. Selected candidate may be asked to pass a
comprehensive background, credit, and/or drug screening. Principals only, no third parties please.****