Additional Job Information:
Title               : Big Data Engineer
Location            : Atlanta, GA, United States
Job Description
Client Corporation is seeking an experienced Big Data Engineer for its Midtown office in Atlanta, GA. The successful candidate must have Big Data engineering experience and a demonstrated ability to work with others to create successful solutions. You will join a smart, highly skilled team with a passion for technology and work on our state-of-the-art Big Data platforms. The candidate must be a strong communicator, both written and verbal, with experience partnering with business areas to translate their data needs and questions into project requirements. The candidate will participate in all phases of the data engineering life cycle and will, both independently and collaboratively, write project requirements, architect solutions, and perform data ingestion development and support duties.
Skills and Experience:
 Required:
6+ years of overall IT experience
3+ years of experience with high-velocity high-volume stream processing: Apache Kafka and Spark Streaming
3+ years of experience with data ingestion from message queues (Tibco, IBM, etc.) and from different file formats such as JSON, XML, and CSV across different platforms
3+ years of experience with Big Data tools/technologies like Hadoop, Spark, Spark SQL, Kafka, Sqoop, Hive, S3, HDFS, etc.
3+ years of experience building, testing, and optimizing Big Data ingestion pipelines, architectures, and data sets
2+ years of experience with Python (and/or Scala) and PySpark/Scala-Spark
3+ years of experience with Cloud platforms e.g. AWS, GCP, etc.
3+ years of experience with database solutions like Kudu/Impala, Delta Lake, Snowflake, or BigQuery
2+ years of experience with NoSQL databases, including HBASE and/or Cassandra
Experience in successfully building and deploying a new data platform on Azure/AWS
Experience with Azure/AWS serverless technologies such as S3, Kinesis/MSK, Lambda, and Glue
Strong knowledge of messaging platforms like Kafka, Amazon MSK, TIBCO EMS, or IBM MQ Series
Experience with the Databricks UI, managing Databricks Notebooks, Delta Lake with Python, Delta Lake with Spark SQL, Delta Live Tables, and Unity Catalog
Knowledge of Unix/Linux platform and shell scripting is a must
Strong analytical and problem-solving skills
Preferred (Not Required):
Strong SQL skills with the ability to write queries of intermediate complexity
Strong understanding of relational and dimensional modeling
Experience with Git code versioning software
Experience with REST API and Web Services
Good business analysis and requirements gathering/writing skills
Qualification:
Bachelor’s degree required, preferably in Information Systems, Computer Science, Electrical Engineering, Computer Information Systems, or a related field
For the Big Data Engineer position:
1. Must have hands-on experience with Databricks
2. Must have hands-on experience with high-velocity, high-volume stream processing: Apache Kafka and Spark Streaming
   a. Experience with real-time data processing and streaming techniques using Spark Structured Streaming and Kafka
   b. Deep knowledge of troubleshooting and tuning Spark applications
3. Must have hands-on experience with Python and/or Scala, i.e. PySpark/Scala-Spark
4. Experience with traditional ETL tools and data modeling
5. Strong knowledge of messaging platforms like Kafka, Amazon MSK, TIBCO EMS, or IBM MQ Series
6. Experience with the Databricks UI, managing Databricks Notebooks, Delta Lake with Python, Delta Lake with Spark SQL, Delta Live Tables, and Unity Catalog
7. Experience with data ingestion of different file formats like JSON, XML, and CSV
8. Knowledge of the Unix/Linux platform and shell scripting
9. Experience with Cloud platforms, e.g. AWS, GCP, etc., and with database solutions like Kudu/Impala, Delta Lake, Snowflake, or BigQuery
Best Regards,
Sharmila Manukonda, BSC
HR Recruiter
Services You Can Trust
M: 404-891-1115
E: sharmila@aaaglobaltech.com | www.aaaglobaltech.com
7000 Peachtree Dunwoody Rd | Bldg. 11, Suite 301
Atlanta, GA 30328