Over the years we’ve delivered dozens of projects. These are the highlights.
Our team built a handling operations platform for 360JF that manages client and supplier contracts for flight handling, with FBOs as suppliers, so requests, pricing, and service terms no longer live in scattered spreadsheets and long email chains.
Web Platform, TypeScript, PostgreSQL, Workflow & Notifications, Audit Trail
Handling is contract-driven and detail-heavy: service scope and pricing depend on the airport, the supplier (often the FBO), and client terms, yet historically all of this lived in spreadsheets and email threads. That creates two painful realities: teams waste time reconciling contract terms across versions, and one-off handling requests can explode into dozens of emails as details change (services, timing, special requests), with a constant risk of missing the "latest agreed set."
We implemented structured contract management for both sides: client handling contracts and FBO/supplier agreements. We then connected them to day-to-day handling operations, so teams can select the location and client, immediately see which FBO agreements and terms apply, and produce a consistent service offer without manual cross-checking.
For execution, the platform keeps a single "source of truth" for what's agreed, with tracked changes and notifications so updates don't get lost in inboxes.
This is now in production and used daily, reducing operational back-and-forth and making handling coordination more reliable and auditable.
We built a fuel operations platform for 360JF to replace spreadsheet-based contract tracking, manual "fuel release" paperwork, and quote creation over email—turning fuel pricing, location agreements, and fuel delivery workflows into a single, controlled system.
TypeScript, Web Platform, PostgreSQL, AWS Cloud, CRM Integration
Maintaining fuel pricing that changes frequently across suppliers and locations; preventing data-entry mistakes in a process with no guard rails; generating quotes fast without risking margin (or overpromising); and eliminating duplicate data entry into the CRM after quotes and fuel liftups.
We centralized supplier and client contracts (including location agreements) with guard rails and validation, then added bulk pricing imports so updates don't happen one line at a time. Quote creation became operationally realistic: a sales agent selects airport + client, the system pulls the relevant supplier agreements, applies pricing/terms, and surfaces options sorted from most favorable to least—then sends the quote and syncs it to the CRM automatically.
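The quote-building flow described above can be sketched roughly as follows. This is a minimal illustration of the idea (filter agreements by airport, apply pricing and fees, sort from most favorable to least), not the production data model; all class and field names are hypothetical.

```java
import java.util.*;

public class QuoteSketch {
    // Hypothetical shapes: a supplier agreement scoped to one airport,
    // and a quote option derived from it.
    record SupplierAgreement(String supplier, String airport,
                             double basePricePerLitre, double intoPlaneFee) {}
    record QuoteOption(String supplier, double totalPerLitre) {}

    // Pull the agreements valid for the chosen airport, apply pricing/terms,
    // and sort the resulting options from most favorable (cheapest) to least.
    static List<QuoteOption> buildQuote(List<SupplierAgreement> agreements, String airport) {
        return agreements.stream()
                .filter(a -> a.airport().equals(airport))
                .map(a -> new QuoteOption(a.supplier(),
                        a.basePricePerLitre() + a.intoPlaneFee()))
                .sorted(Comparator.comparingDouble(QuoteOption::totalPerLitre))
                .toList();
    }

    public static void main(String[] args) {
        var agreements = List.of(
                new SupplierAgreement("SupplierA", "LJU", 0.92, 0.05),
                new SupplierAgreement("SupplierB", "LJU", 0.89, 0.10));
        System.out.println(buildQuote(agreements, "LJU")); // cheapest total first
    }
}
```

In the real system the "most favorable" ordering also factors in contract terms, not just price, and the selected option is synced to the CRM automatically.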
We also removed the grind from repetitive contract work: renewals are typically "95% the same," so contracts can be duplicated and edited, and new supplier rollouts across multiple airports don't require entering tens of near-identical agreements from scratch.
We built an RSB-compliant SAF inventory and certificate registry to enable a new SAF revenue stream, supporting virtual SAF credit sales while preventing double counting and ensuring a clean mass-balance audit trail from intake through issuance.
Web Platform, PostgreSQL, AWS, Audit Logging, Role-based Access, Reporting/Exports
Before the registry, SAF tracking lived in spreadsheets. That was workable until volume grew; then teams started stepping on each other's toes. The virtual-credit model adds strict compliance needs: every quantity must be traceable (from entry to sale), manipulation-resistant, and audit-ready, because mistakes in mass balance and availability tracking directly cause losses.
We designed and implemented a dedicated SAF inventory domain model (warehouses, batches, contingents, tanks) and connected the full lifecycle: intake → storage state → allocation → sale → certificate issuance. The platform enforces consistency checks, preserves an immutable audit trail of every movement/transaction, and provides a clear view of available quantities so coordinators can scale throughput without spreadsheet chaos while staying aligned with RSB registry expectations.
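The core bookkeeping idea can be sketched as follows: an append-only trail of movements, with availability derived from the trail and a consistency check that refuses any allocation or sale exceeding what was taken in. This is a minimal sketch of mass-balance accounting, not the production domain model; the names (`SafLedger`, `Movement`) are illustrative.

```java
import java.util.*;

public class SafLedger {
    enum Kind { INTAKE, ALLOCATION, SALE }
    record Movement(Kind kind, String batchId, double tonnes) {}

    // Append-only audit trail: movements are recorded, never edited or deleted.
    private final List<Movement> trail = new ArrayList<>();

    // Available quantity is always derived by replaying the trail,
    // so the audit trail and the balance can never disagree.
    public double available(String batchId) {
        double sum = 0;
        for (Movement m : trail)
            if (m.batchId().equals(batchId))
                sum += (m.kind() == Kind.INTAKE) ? m.tonnes() : -m.tonnes();
        return sum;
    }

    // Consistency check: allocations/sales may never exceed intake,
    // which is what prevents double counting of the same volume.
    public void record(Movement m) {
        if (m.kind() != Kind.INTAKE && m.tonnes() > available(m.batchId()))
            throw new IllegalStateException("would exceed available mass balance");
        trail.add(m);
    }

    public List<Movement> auditTrail() { return List.copyOf(trail); }
}
```

The real platform layers warehouses, contingents, and tanks on top of this, but the invariant is the same: every certificate issued must trace back through the trail to a recorded intake.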
Our team stabilized a troubled music royalty ETL pipeline by filling the critical gaps left by a previous vendor — delivering serverless dev environments, automated testing, and centralized documentation.
Python, AWS, AWS Glue, Lambda, S3, RDS (MySQL), Amazon Athena, Step Functions, CDK
- Existing data pipeline with poor documentation
- Solid foundations of the platform, but very important pieces were missing
- Limited resources and time for stabilizing the process
- Multiple parties involved without a clear separation of responsibilities
During the kick-off, the Director of Technology and Business Solutions told us that the ETL pipeline platform was not an exciting project to work on, had several critical issues left by a previous vendor, and that people had been very dissatisfied in the past. His biggest concern was the instability of the platform, caused both by technical problems and by the churn of the engineers working on it.
As always, we applied a basic analytical approach to onboarding onto the existing platform.
A month in, we created and presented a review document of the platform. The verdict: the foundations were solid, and the quality of the engineering work put in by the previous vendor was admirable. However, the missing last 20% of the platform was creating a lot of problems and led to a poor experience and poor results. Senior management was very supportive and gave us the green light and autonomy to implement all the missing pieces. Together with them, we created a plan and a timeline to work on these improvements alongside new features and integrations.
After a couple of months we had serverless dev environments, automated testing, and centralized documentation in place.
Our team developed ETL and core services for a retail client's Digital Asset Management platform, improving data consistency through advanced Kafka Streams patterns and enabling proactive issue detection through enhanced monitoring and observability.
Java, .NET, Confluent Kafka, KSQL, Kafka Streams, PostgreSQL, CouchDB, Docker, AWS, ECS, Lambda, DynamoDB
Navigating a large and complex system; supporting several teams with the data they need while acting as a middleware data provider between multiple APIs and layers; data inconsistency due to limitations of an older version of Kafka Streams; and monitoring and observability that were not at the desired level.
When we started the project, a plan for solving data consistency was already in place. However, the existing team's experience with Kafka Streams was limited. We immediately spotted room for improvement and proposed a different approach based on a KTable-KTable join, which improved data quality within a few days of work. This wasn't the final solution, but it helped the team lower the number of incidents caused by poor data quality and cut the team's support time in half.
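The semantics that make a KTable-KTable join attractive here can be sketched without the Kafka Streams API itself: each side keeps only the latest value per key, and an update on either side re-emits the joined record for that key, so late or out-of-order updates converge to a consistent joined view. The sketch below simulates that behavior with plain maps; it is an illustration of the join semantics, not Kafka Streams code.

```java
import java.util.*;

// Simulates KTable-KTable inner-join semantics: two changelog "tables"
// keyed by the same key, with the joined view re-derived on every update.
public class TableJoinSketch<K, L, R> {
    private final Map<K, L> left = new HashMap<>();
    private final Map<K, R> right = new HashMap<>();
    private final Map<K, String> joined = new HashMap<>();

    // Re-emit the joined record whenever either side changes;
    // an inner join only emits once both sides have a value.
    private void rejoin(K key) {
        L l = left.get(key);
        R r = right.get(key);
        if (l != null && r != null) joined.put(key, l + "|" + r);
    }

    public void updateLeft(K key, L value)  { left.put(key, value);  rejoin(key); }
    public void updateRight(K key, R value) { right.put(key, value); rejoin(key); }
    public Map<K, String> view() { return Map.copyOf(joined); }
}
```

Because the join is recomputed from the latest state of both tables, a record arriving on one side before its counterpart on the other no longer produces a permanently inconsistent result, which is exactly the class of incident this change reduced.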
When work on the long-term solution started, we gave the team feedback that prevented several scenarios in which data consistency was at stake.
While working on the new solution and supporting the existing system, we devised a plan to improve monitoring and alerting, which was executed as a low-priority track over the following months. As a result, the team now usually acts on an issue before the incident is reported by dependent teams or the business. This gave the team more focused time for planning and execution and lowered the pressure and sense of urgency.
As an interim CTO, Milan led a team of software developers and data scientists and delivered a highly available healthcare research platform from greenfield to production.
Java, MongoDB, IBM Cloud, IBM Watson, Ansible, GoCD
Navigating complex healthcare privacy and security regulations, as well as an external audit by one of the biggest pharmacology companies in the world, all with very tight deadlines and limited resources.
To deliver an enterprise-ready platform we had to, alongside all the engineering effort, fully automate and document the infrastructure setup and document all the relevant procedures (business continuity plan, privacy and security design and considerations, backup and restore procedures, and so on).
Even though we were building an MVP, we quickly realized how often the requirements change, and due to the complexity of the data flow we decided to cover the platform heavily with an integration test suite from early on. This decision was crucial to the product's later success: it allowed us to make the changes requested by the external auditors, just before putting the platform into production, with confidence.
By being very careful and proactive about the scope and feature set, we managed to put the platform into production on time and on budget and onboard the first patients as planned.
Vladimir worked as a software architect with multiple development teams to create an event-driven, highly scalable, microservices-based platform, taking it from greenfield to production.
Java, Spring Boot, Apache Kafka, Schema Registry, Apache Cassandra, Redis, Elasticsearch, PostgreSQL, Jenkins, Ansible
Making a highly configurable system multi-tenant, with a large number of regulatory requirements for each country. Keeping latency low in critical features while developing a highly scalable, distributed system.
The aim of the project was to rewrite a monolithic platform that had been in development for more than 12 years. Vladimir started the project with requirements-gathering sessions, on the basis of which the architecture proposal was written. Requirements gathering pinpointed two major challenges the new architecture would face, leading to a Proof of Concept (PoC).
After a successful PoC, development started and the team grew to meet the needs of the project. The challenge was to keep the architecture and code as uniform as possible to allow easy onboarding and flexibility when team members switched between services. This also kept the build and deployment pipelines simple.
The platform was put in production without major problems or reworks.
As an interim CTO, Milan led the product development team working on an enterprise-level platform with strict privacy and security requirements. Under his leadership, delivery was streamlined and made predictable, productivity increased, and the developers’ onboarding process was shortened.
Python, AWS Cloud (S3, CloudFormation, Quicksight, Athena)
A complex system that was hard to maintain and extend, with AWS S3 used as a database. Technical leadership had left the company, the development team had a lot of trouble delivering features in a predictable manner, and ownership and team motivation were very low.
Milan joined the team as a backend engineer. He introduced and followed the Scrum methodology and improved the transparency of the product development process. Once the entire product team adopted the process and saw that transparency was a good thing, ownership and motivation increased drastically, which led to Milan being assigned the interim CTO role. On the technical side, we identified both the strengths of the current platform (reliability, privacy by design) and the weak points (hard to extend, complex, undocumented). When we improved the documentation (a lot!) and started building new components on the edge of the system (communicating with the core via its API), developers’ productivity increased significantly and the onboarding process was shortened.
Being obsessed with automated software testing and quality assurance, Milan created a declarative test data generator for Java. It allows easily specifying a test data format and quickly generating data; the goal was to create the opposite of SQL for test data. Instead of querying data, it generates it: instead of “give me all the users born in 1981, having a driving licence and owning a car,” the generator allows for “create me 10,000 random users, of which 150 are born in 1981, have a driving licence, and own a car.” The library relies heavily on generics and is used to generate test data for automated test suites in several production projects.
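The declarative idea can be sketched as follows: describe the baseline shape of the data, how many records should match a special recipe, and let the generator do the rest. This is a minimal illustration under assumed names (`generate`, `User`), not the actual library’s API.

```java
import java.util.*;
import java.util.function.Function;

public class DataGenSketch {
    record User(int birthYear, boolean hasLicence, boolean ownsCar) {}

    // "Create n random records, of which (at least) k match a special recipe."
    // Generics let the same generator produce any record type.
    static <T> List<T> generate(int n, Function<Random, T> base,
                                int k, Function<Random, T> special) {
        Random rnd = new Random(42); // fixed seed for reproducible test data
        List<T> out = new ArrayList<>();
        for (int i = 0; i < k; i++) out.add(special.apply(rnd));
        for (int i = k; i < n; i++) out.add(base.apply(rnd));
        Collections.shuffle(out, rnd); // mix special records into the rest
        return out;
    }

    public static void main(String[] args) {
        // 10,000 random users, of which 150 are born in 1981,
        // have a driving licence, and own a car.
        List<User> users = generate(10_000,
                r -> new User(1950 + r.nextInt(50), r.nextBoolean(), r.nextBoolean()),
                150,
                r -> new User(1981, true, true));
        long matching = users.stream()
                .filter(u -> u.birthYear() == 1981 && u.hasLicence() && u.ownsCar())
                .count();
        System.out.println(matching); // >= 150: the base recipe may add more by chance
    }
}
```

Note that the guarantee is "at least 150," since the random baseline can also produce matching users; a real generator would need to decide whether the count is exact or a lower bound.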