The beginning
Lendo's initial success on the Swedish market led to new branches opening in Norway and Finland. Each branch took the application source as it was, customized it to its own needs, and implemented its own features. Even though the current Lendo application works well, it is not that strong from a technical point of view.
The application was designed as a monolith, built on the outdated Zend1 framework and MySQL. What's worse, it is still not scalable at all. All these problems needed a solution, and it was the right moment to start the cooperation with Schibsted Tech Polska.
The team
The Lendo team at Schibsted Tech Polska was established in October 2015. Our STP team cooperates closely with the base team in Stockholm.
The team's main objective is to create a microservice-based platform that will cover the functionality of the existing application and be fully scalable. The second objective, which came up just months ago, is to merge the functionality of the three branch applications into a common core application that meets the requirements of all of them.
While the Polish part of the team has so far been responsible for developing particular services, our colleagues in Stockholm are working hard on the big picture: the architecture of the new platform and its general parts.
Event sourcing for all services
Each of our services is based on event sourcing. That means every action produces an event, which stores every piece of information about what has just happened. These events are serialized and stored in a database we call the event storage. Our first implementation of the event storage was based on MongoDB, but in the end we chose Cassandra.
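To make the pattern concrete, here is a minimal sketch, not Lendo's actual code: the LoanApplicationSubmitted event, its fields, and the EventStore interface are illustrative assumptions, with the Cassandra-backed implementation left behind the interface.

```php
<?php

// An immutable event capturing every piece of information about what
// just happened. The class name and fields are hypothetical examples.
final class LoanApplicationSubmitted
{
    private $applicationId;
    private $amount;
    private $occurredAt;

    public function __construct(string $applicationId, int $amount, \DateTimeImmutable $occurredAt)
    {
        $this->applicationId = $applicationId;
        $this->amount = $amount;
        $this->occurredAt = $occurredAt;
    }

    public function getApplicationId() { return $this->applicationId; }
    public function getAmount() { return $this->amount; }
    public function getOccurredAt() { return $this->occurredAt; }

    // Serialize the event so it can be appended to the event storage.
    public function toArray(): array
    {
        return [
            'type'        => self::class,
            'aggregateId' => $this->applicationId,
            'payload'     => ['amount' => $this->amount],
            'occurredAt'  => $this->occurredAt->format(\DateTime::ATOM),
        ];
    }
}

// The event storage is append-only; in our case the concrete
// implementation targets Cassandra.
interface EventStore
{
    public function append(array $serializedEvent);
}
```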
Apart from being saved, every event is also dispatched within the system. We have registered event listeners that react to these events. The listeners are responsible for updating the read model database, which works as the source of information for queries. Whenever the application receives a query-type request, it uses its read model to answer it. Our current solution for the read model database is based on ElasticSearch storage.
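Sketched below, again with hypothetical names, is such a listener: a projector that reacts to the event from the previous snippet and writes a denormalized document into ElasticSearch via the official elasticsearch-php client.

```php
<?php

use Elasticsearch\ClientBuilder;

// A projector: an event listener that keeps the read model in sync.
// The index and field names are illustrative assumptions.
final class LoanApplicationProjector
{
    private $elasticsearch;

    public function __construct()
    {
        $this->elasticsearch = ClientBuilder::create()->build();
    }

    // Invoked by the event dispatcher after the event has been stored.
    public function onLoanApplicationSubmitted(LoanApplicationSubmitted $event)
    {
        // Index a denormalized document that query requests can read
        // directly, without touching the event storage.
        $this->elasticsearch->index([
            'index' => 'loan_applications',
            'type'  => 'application',
            'id'    => $event->getApplicationId(),
            'body'  => [
                'amount'      => $event->getAmount(),
                'submittedAt' => $event->getOccurredAt()->format(\DateTime::ATOM),
            ],
        ]);
    }
}
```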
For more information about how event sourcing works, see the related article “Time traveling with event sourcing”.
Symfony3 is our application framework wrapping up the services. We have also used PHP7 since it was introduced at the end of 2015. Each of our services provides a RESTful API, since our microservice platform will communicate internally using API calls. Some services also provide a simple web interface for administration or for browsing their state and history.
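As a rough illustration of the query side of such an API in Symfony3 (the route, controller, and read model service id are hypothetical):

```php
<?php

namespace AppBundle\Controller;

use Sensio\Bundle\FrameworkExtraBundle\Configuration\Method;
use Sensio\Bundle\FrameworkExtraBundle\Configuration\Route;
use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\JsonResponse;

class LoanApplicationController extends Controller
{
    /**
     * @Route("/applications/{id}")
     * @Method("GET")
     */
    public function getAction(string $id)
    {
        // Query requests never touch the event storage; they are
        // answered entirely from the read model.
        $document = $this->get('app.read_model.loan_applications')->find($id);

        return new JsonResponse($document);
    }
}
```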
We decided to use event sourcing because it makes it easier to investigate the exact history and data flow in our services. Moreover, whenever a new feature requires new read model data, that data can be recreated from the event history. For such cases, event sourcing seemed the most reasonable choice. We also keep on improving our event engine.
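Rebuilding a read model from history can be as simple as the sketch below, assuming the event storage can stream its events in order; the ReplayableEventStore interface and its streamAll() method are hypothetical additions to the earlier snippets.

```php
<?php

// A hypothetical extension of the event storage that can stream
// the full event history in order.
interface ReplayableEventStore extends EventStore
{
    public function streamAll();
}

// Replay every stored event through a projector to (re)build a read
// model, e.g. when a new feature needs new read model data.
function rebuildReadModel(ReplayableEventStore $store, LoanApplicationProjector $projector)
{
    foreach ($store->streamAll() as $event) {
        if ($event instanceof LoanApplicationSubmitted) {
            $projector->onLoanApplicationSubmitted($event);
        }
    }
}
```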
The big picture
The whole platform is going to be event-driven, which means it will work much like every service already does internally. Whenever an action takes place, it will produce an event that is published to a RabbitMQ message queue, and all services will take appropriate actions depending on the events they subscribe to. There will be one general service for user authentication and authorization, based on JWT (JSON Web Token). Since all services are designed as Docker containers, Kubernetes will be used for orchestration in the production environment. It greatly simplifies auto-deployment and auto-scaling and provides multiple ops features out of the box.
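To give a feel for this platform-level flow, here is a minimal sketch of publishing a domain event to RabbitMQ with the php-amqplib library; the exchange name, routing key, and connection details are illustrative assumptions, and the event class comes from the earlier snippet.

```php
<?php

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

// Connect to the broker (host and credentials are placeholders).
$connection = new AMQPStreamConnection('rabbitmq', 5672, 'guest', 'guest');
$channel = $connection->channel();

// A durable topic exchange lets each service subscribe only to the
// routing keys it cares about.
$channel->exchange_declare('domain_events', 'topic', false, true, false);

// Publish the serialized event; subscribing services react independently.
$event = new LoanApplicationSubmitted('42', 150000, new \DateTimeImmutable());
$message = new AMQPMessage(
    json_encode($event->toArray()),
    ['content_type' => 'application/json', 'delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT]
);
$channel->basic_publish($message, 'domain_events', 'loan_application.submitted');

$channel->close();
$connection->close();
```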