Englewood, Colorado, USA
600 days ago
Data Warehouse / Big Data Architect

Company Description

ClientSolv Technologies is an IT solutions firm with over a decade of experience serving Fortune 1000 companies, public-sector organizations, and small to medium-sized companies. ClientSolv Technologies is a woman-owned and operated company that is certified as a WMBE and 8(a) firm by the federal government's Small Business Administration.

Job Description

We are seeking a Data Warehouse/Big Data Architect with AWS experience for an onsite contract-with-option-to-hire role in Denver, CO. This role will be responsible for the following:

- Deploy enterprise data-oriented solutions leveraging Data Warehouse, Big Data, and Machine Learning frameworks
- Optimize data engineering and machine learning pipelines
- Support data and cloud transformation initiatives
- Contribute to our cloud strategy based on prior experience
- Understand the latest technologies in a rapidly innovating marketplace
- Independently work with all stakeholders across the organization to deliver point and strategic solutions

 

Qualifications

- Prior experience working as a Data Warehouse/Big Data Architect.
- Experience with the Apache Spark processing framework and Spark programming languages such as Scala, Python, or advanced Java, with sound knowledge of shell scripting.
- Experience in both functional programming and Spark SQL, processing terabytes of data; specifically, this experience must include writing big data engineering jobs for large-scale data integration in AWS.
- Prior experience writing machine learning data pipelines in a Spark programming language is an added advantage.
- Advanced SQL experience, including SQL performance tuning, is a must.
- Experience with other big data frameworks such as MapReduce, HDFS, Hive/Impala, and AWS Athena.
- Experience in logical and physical table design in a big data environment to suit processing frameworks.
- Knowledge of using, setting up, and tuning a resource management framework such as YARN, Mesos, or standalone Spark.
- Experience writing Spark streaming jobs (producers/consumers) using Apache Kafka or AWS Kinesis is required.
- Knowledge of a variety of data platforms such as Redshift, S3, Teradata, HBase, MySQL/PostgreSQL, and MongoDB.
- Experience with AWS services such as EMR, Glue, S3, Athena, DynamoDB, IAM, Lambda, CloudWatch, and Data Pipeline; must have used these technologies to deploy specific solutions in the areas of big data and machine learning.
- Experience with AWS cloud transformation projects is required.

Additional Information

This role is located onsite in Denver, CO (no remote work option is available) and is a contract with option to hire.
