Big Data Specialist - Kafka
You will work within a growing Data Intelligence and Engineering Platform team, playing a critical role in expanding the roadmap for data-platform developments.
About Our Client
One of the largest financial institutions in the Middle East, now looking to grow its data capability in the region.
Job Description
- Create stubs for producers, consumers, and consumer groups to help onboard applications from different languages and platforms.
- Apply knowledge of data architecture, workflow design, and ingestion-framework development.
- Leverage Hadoop ecosystem knowledge to design and develop solutions using Spark, Scala, Python, Hive, Kafka, and related tools.
- Work with a variety of databases, both relational (SQL) and NoSQL, with an established command of each.
- Design and recommend the best approach for moving data from different sources into HDFS using Kafka.
- Build and optimize 'big data' pipeline architectures and data sets, both batch-oriented and real-time.
- Build strong relationships with interfacing application development teams, both upstream and downstream, as well as other support teams.
- Apply expertise in NoSQL databases such as HBase and MongoDB, with hands-on use of tools such as Spark, Scala, Python, and Hive SQL.
The Successful Applicant
- Minimum of 7 years' hands-on experience with a strong data background.
- Hands-on experience with major components of the Hadoop ecosystem, such as HDFS, Hive, Oozie, Sqoop, Spark, and YARN.
- Practical knowledge of data ingestion using ETL tools (Informatica, Talend, etc.) as well as more recent big data tools.
- Experience working with cloud-based analytics data warehousing tools such as BigQuery and Dataproc.
- Extensive experience in stream processing and analytics using Kafka, Spark Streaming, and Striim.
What's on Offer
In addition to a fantastic Big Data Specialist opportunity working on varied projects, this role offers an attractive salary and occasional travel.