At lemon.markets, data is a huge part of what we offer our customers: real-time stock market data streamed directly from the stock exchange, historical data sets aggregated on different time frames, and much more. The core driver of our infrastructure is therefore Apache Kafka, together with scalable connectors from the Kafka ecosystem, serving our customers all the data they need.
We typically process 25,000+ messages per second with Kafka on a normal day, so building a reliable, scalable, low-latency infrastructure that can handle peaks is core to achieving our mission.
As described, we not only stream the data but also aggregate and store it. We therefore combine multiple connectors with the core pipeline, e.g. to persist all data in scalable databases or to stream it to our customers via websockets.
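To make the aggregation step concrete, here is a minimal, self-contained sketch of rolling ticks up into time-frame bars. It uses plain Python with no Kafka dependency, and the `(timestamp, price)` tick format and one-minute bucketing are illustrative assumptions, not our actual schema:

```python
def aggregate_ohlc(ticks, frame_seconds=60):
    """Aggregate (timestamp, price) ticks into OHLC bars per time frame.

    `ticks` is an iterable of (unix_timestamp, price) tuples -- an
    illustrative stand-in for the messages consumed from Kafka.
    Returns {frame_start_timestamp: {"open", "high", "low", "close"}}.
    """
    bars = {}
    for ts, price in sorted(ticks):
        bucket = ts - (ts % frame_seconds)  # start of the time frame
        if bucket not in bars:
            # first tick in this frame opens the bar
            bars[bucket] = {"open": price, "high": price,
                            "low": price, "close": price}
        else:
            bar = bars[bucket]
            bar["high"] = max(bar["high"], price)
            bar["low"] = min(bar["low"], price)
            bar["close"] = price  # latest tick so far closes the bar
    return bars

# Example: three ticks inside the first minute, one in the next
ticks = [(0, 100.0), (10, 102.5), (59, 101.0), (65, 99.5)]
bars = aggregate_ohlc(ticks)
```

In production this kind of rollup would run continuously on the stream (e.g. via Kafka Streams windowing) rather than over a finished list, but the bucketing logic is the same.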
You will also help us build, maintain and improve a data bus for internal, event-driven services. For instance, if you wire money to lemon.markets, an event is triggered, sent through our Kafka data bus and received on the other end by another service that processes it. As you might have guessed, processing customers' money demands extra effort on the infrastructure side, so that will also be one of your daily challenges.
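The event flow described above can be sketched with a toy in-memory bus. A stdlib `queue.Queue` stands in for a Kafka topic here, and the topic name, event fields and helper functions are made up for illustration:

```python
import json
import queue

# Toy stand-in for a Kafka topic on the data bus.
money_transfers = queue.Queue()

def produce(topic, event):
    """Producer side: e.g. the deposit service publishes an event."""
    topic.put(json.dumps(event))  # events travel as serialized messages

def consume(topic):
    """Consumer side: another service picks the event up for processing."""
    return json.loads(topic.get())

# A customer wires money -> an event travels over the bus
# and is received by a downstream service.
produce(money_transfers, {"type": "money_transfer", "amount_eur": 500})
event = consume(money_transfers)
```

With Kafka instead of a queue, the producer and consumer would live in separate services and the topic would be durable and replayable, which is exactly what makes it suitable for money movements.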
You will always work closely with our backend and AWS infrastructure teams, as you are building the important bridge between our customers and partners and maintaining our most critical infrastructure.
- You have at least three years of experience as a Data Infrastructure Engineer. Kafka, Kafka, Kafka! You have experience with streaming technologies such as Kafka, Kafka Streams or Flink, plus the curiosity to explore upcoming tools like Redpanda with us. Experience with highly scalable databases (beyond standard SQL) is a big plus. You do not need to know every database we use or want to use, but you need the skills to choose the right database for our requirements. Solid Python skills are nice to have, so you can test out databases and Kafka connectors with your own code.
- You are experienced with building in the cloud (AWS). You are familiar with the managed services for streaming and databases, and keen to build out, maintain and improve our infrastructure for event-driven services.
- You have advanced database experience. You are used to working with large amounts of data (multiple terabytes), high throughput (tens of millions of database operations per hour) and other demanding requirements (such as ultra-low latency or complex database queries).
- You have no problem learning new technology on the go. You solve problems with more than just the tools you already know. We will always help you with that, but your willingness has to come first.
- You have been part of an (early-stage) startup or a similar lean organization before. Working in an early-stage startup can be frightening, overwhelming and amazing at the same time. We want you to have experienced this before.
- You enjoy building a product from scratch. This means high involvement in the product process and understanding what the customer really needs. It also means that you enjoy making technical decisions.
- You do not want to over-engineer. We have to meet our requirements for security, latency and uptime, and you should always find the sweet spot between those requirements and over-engineering.
—> We are more than happy to be proven wrong. Simply apply if the job sounds good to you and we will have an honest conversation about you and the way you think about infrastructure engineering.
- Salary based on your experience. Fair cash compensation, adjustable with your appetite for stock options.
- Mandatory stock options program. We want you to be a true owner for the long term.