In a lead role, architect, design, and develop comprehensive data models using Hadoop, Teradata, and similar platforms, along with data modeling tools such as ERWIN and XMLSpy. Evaluate data integration platforms using IEEE and DAR methodologies. Design data solutions for Master Data Management and Data Modernization programs for Government & Civilian Agencies. Design, develop, and create enterprise applications to support Case Management, CRM/ERP, Data Warehousing, and Reporting.
Requirements: Master’s degree or equivalent in Computer Science, Information Systems, or any Engineering field and 12 months’ experience in the job offered or as a Software Developer or Engineer, Data Analyst, or a related occupation. Experience must include working with large enterprises and with Hadoop, MSSQL, Teradata, InfoSphere, OBIEE/SSRS, and Java/C#. Position based out of company’s headquarters in Sterling, Virginia and subject to relocation to various client offices throughout the U.S.
Refer to Ad# RR2017 when responding. Please send resume to email@example.com OR Human Resources, 43676 Trade Center Plaza, Suite 235, Sterling, VA 20166.
Requirements: Bachelor’s degree or equivalent in Computer Science or a related field and two years’ experience in the job offered or as a Software Engineer or Developer or a related occupation. Experience must include two years with SSIS programming. We will accept the educational equivalent of a bachelor’s degree as evaluated by a credentials evaluation service. Position based out of company’s headquarters in Sterling, Virginia and subject to relocation to various client sites throughout the U.S.
Refer to Ad# RS2017 when responding. Please send resume to firstname.lastname@example.org OR Human Resources, 43676 Trade Center Plaza, Suite 235, Sterling, VA 20166.
Ingest data from various structured data sources into Hadoop and other distributed Big Data systems (HDFS, HBase, Hive, Sqoop). Develop and support the delivery and sustainment of an automated ETL pipeline using Java and other scripting tools (MapReduce, Python, Pig). Validate data extracted from structured inputs, databases, and other repositories using scripts and other automated capabilities, logs, and queries. Enrich and transform extracted data as required. Monitor and report on data flow through the ETL process. Perform data extractions, data purges, or data fixes in accordance with current internal procedures and policies. Track development and operational support via user stories and decomposed technical tasks in provided issue-tracking and development tools, including Git, Maven, and JIRA.
Please email resume to BIGData@mindboard.com.