
Streaming Data Engineer – London Innovation Lab

Sector: Banking & Finance
Role: Streaming Data Developer

Job Description


We are looking for a Streaming Data Engineer to join our team creating new streaming applications for financial data. You will work closely with product owners and team members to deliver solutions through new and existing channels.


As a Streaming Data Engineer you will be responsible for the reliable, scalable deployment of the Kafka cluster. You will build and integrate with other systems for provisioning, monitoring, and alerting. Working closely with our support and development teams, you will help deliver this project with a high level of automation and reliability. Because we want to minimise manual processes, strong software engineering proficiency is key to this role.


This is an exciting opportunity to work on a greenfield project with cutting-edge technology, which will have a huge impact on our future technology architecture.


We believe the future’s here. Right here with us. Home to where we define, ideate, develop and distribute production-ready financial solutions of far-reaching impact. And right now, the door’s open to direct the future of our technology for a truly global client base. This means collaborating with the keenest minds in data science, big data, software engineering, web development, UX design and more. Doers looking to bring the next bold ideas to life for a fascinating array of clients - investing, trading and transacting at the forefront of change in markets and economies the world over.


If you have this kind of vision, capable of seeing ahead, of developing a clear path forward in a quest to try the as yet untried, here is the opportunity. In a supported, resource-rich, vibrant co-working environment, part of an ecosystem of globally interconnected labs, realising a broader mission of enabling growth and economic progress on a scale you won’t find anywhere else. Welcome to our London Innovation Lab.


Key Responsibilities:

  • Apply deep knowledge of Apache Kafka (Kafka Streams, KSQL, Kafka Connect)
  • Build high-performance processing pipelines to collect, cleanse and shape data for machine learning applications
  • Work closely with product owners to develop client solutions for new and existing channels
  • Produce real-time, data-intensive systems



Key Requirements:

  • You are passionate about databases and have worked with SQL/NoSQL technologies (SQL Server, Oracle, Couchbase, MongoDB, etc.)
  • You have experience deploying fault-tolerant distributed systems on Linux, possibly in a cloud environment
  • You have a strong understanding of Kafka and other stream-processing technologies such as Spark or Storm
  • You understand how to use data systems within the big data space (e.g. the Hadoop ecosystem)
  • You have high development standards, especially for code quality, code reviews, unit testing, continuous integration and deployment
  • You have proven capability to interact with clients and deliver results, taking ideas to production
  • You are an adaptable, resourceful, well organised team player with a strong work ethic
  • You have experience working in fast-paced development environments
  • You have strong verbal and written communication skills
  • You are educated to degree level or above


Valuing Diversity:

Demonstrates an appreciation of a diverse workforce. Appreciates differences in style or perspective and uses differences to add value to decisions or actions and organisational success.



Citi is an Equal Opportunities Employer