JOB DESCRIPTION: SAIC is seeking a Sr. Data Engineer to perform data model design, data formatting, and ETL development optimized for efficient storage, access, and computation across a variety of use cases. You will work closely with data scientists, software developers, and leadership to understand use cases and requirements, then leverage the appropriate tools and resources to produce the desired customer deliverables. You will explore data from various sources; develop new tools, code, and services to execute data engineering activities; develop new data models and modify existing ones; and write code for ETL processes in a fast-paced environment.
- Move structured and unstructured data (gigabyte to terabyte range) using Sponsor-approved methods.
- Execute data ingestion activities to store data in a local or enterprise-level (Integrated Data Layer) location.
- Acquire data from multiple data sources and maintain resulting databases, data warehouses, and/or data lakes.
- View data in its source format.
- Design, develop, and manage API connections from multiple data sources.
- Develop code to format data that facilitates exploration.
- Analyze source data formats and work with Data Scientists and Mission Partners to determine the formats and transformations that best meet mission objectives.
- Design and develop code and tools to provide one-time and on-going data formatting and transformations into enterprise or boutique data models.
- Design and implement new ETL code, or adapt existing code, following the best practices and standards currently in use in the enterprise.
- Manage existing databases, data warehouses, and data lakes to ensure proper data connectivity is maintained, making updates as necessary.
- Develop an ETL Code Transition Plan when the Sponsor identifies a specific project. Projects will be identified periodically.
- Develop and deliver Software Documentation for each code project, including ETL mappings, a code use guide, the code location (generally Bitbucket) with access instructions, and any anomalies encountered.
- Experience working in a fast-paced environment.
- Ability to adapt in an environment with customer-directed changes.
TYPICAL EDUCATION AND EXPERIENCE:
- Bachelor’s degree and 9+ years of experience, or Master’s degree and 7+ years of experience.
- ETL or database certification (Oracle, Microsoft, IBM, or similar) strongly preferred
- Experience with cloud services (Google, Amazon Web Services (AWS), and others)
- Experience working and developing capabilities on Linux and Windows
- Experience working with Big Data tools (Spark, Hadoop, and other software).
- Experience developing, testing, and maintaining Python programs as packages or notebooks
- Experience developing and maintaining data processing flows
- Experience working with and maintaining SQL database systems, particularly Microsoft SQL Server, PostgreSQL, and MySQL
- Experience working with NoSQL systems such as MongoDB, Solr, and key-value data stores
- Experience developing data pipelines and automation
- Experience with API development and management using tools such as Postman
- Experience working with diverse data types including text, image, video, audio, and binary files
- Experience developing and maintaining dashboards for users to engage with data
- Certification from a Cloud system provider (Google, Amazon Web Services, or similar)
- Familiarity with collaboration tools:
  - Atlassian JIRA & Confluence
  - MS Teams
- Familiarity with statistical tools such as Alteryx, Python, and R
- Experience formatting data for use in visualization tools (Tableau, Power BI, or others)
Clearance Requirement: Candidate must have the ability to obtain and maintain a Secret clearance during their employment.