DevOps Services 24/7
Spaceport is a team of DevOps engineers and Big Data specialists from Kyiv, Ukraine. We deliver reliable IT services to support your business growth. Cloud infrastructure design and management, building CI/CD pipelines, enabling smart monitoring and alerting, building bespoke Big Data solutions that meet your unique project requirements — we deliver value that will carry your company further.
Our expertise covers the following domains:
- DevOps
- 24/7 cloud infrastructure management, monitoring and alerting
- cloud environment optimization for scalability, security and cost-efficiency
- building CI/CD pipelines for software development and cloud infrastructure management
- application containerization and container management automation
- splitting applications into microservices and operating them
- ensuring security and compliance of your data
- Big Data Analytics / Machine Learning / AI
- Data mining and scraping, building crawlers and custom search indexes
- Big Data Analytics and processing using AWS or Google Cloud Big Data tools, as well as custom-tailored Big Data systems built for your unique projects
- Product personalization for your customers, predictive analytics to enable self-healing for your mission-critical systems
Our DevOps technology stack includes all AWS and GCP tools, cloud migration and IT infrastructure management solutions like Terraform, Kubernetes, Jenkins, CircleCI, Ansible, Gitlab CI, Prometheus & Grafana, ELK stack, SumoLogic and Splunk, etc.
As for our Big Data service range, we provide AWS and GCP Machine Learning management, along with custom Deep Learning networks and data processing systems (Apache stack, bespoke platforms) for various processes.
Recommended Providers
Focus
Portfolio
NLG, FQA, NLP

Financial Quantitative Analytics
We have developed the backbone of a Machine Learning platform for financial quantitative analytics. The core problem with existing manual analytics modelling methods is that building a model takes a long time, while the data stays relevant for only 30 days or less. A model therefore has to be built quickly in order to remain useful for as long as possible. This is a huge market with tremendous daily transaction activity.
We developed the backend part of the platform using a neural network that stores many of the model components for financial quantitative analytics. The user can quickly compose a model from blocks in a GUI, and the backend creates the specified model within the platform.
Our model constructor helps create, adjust and manage quantitative analytics models on the fly. The platform lets people without any ML knowledge tap into the power of ML. This provides a competitive edge and helps make impactful decisions faster, which is especially important in the world of financial analytics.

NLP Job Board
Another solution we built from scratch, for a high-tech start-up, is an innovative job board for the Asian market. The job board serves the healthcare and medical science industry and targets top-tier researchers and scientists. The challenge was twofold: neither employers nor applicants can describe their skills precisely, either because NDAs prevent them from doing so or because the project descriptions involve highly specific terminology that has no commonly accepted meaning yet. The job board therefore has to determine whether an applicant’s skills are relevant to a posting’s requirements, and also recommend applicants based on additional parameters, even when their skills do not match the job requirements exactly.
We have developed a fully functional back-end prototype and are currently working on a GUI. The job board works like other popular job portals: the applicant remains anonymous until they privately provide their contacts for the interview, so their current employers have no way of knowing their personnel is actively seeking a better position. As a result, our prototype addresses two main concerns of all job boards:
- Privacy
- Skill-relevance
The prototype uses Word2vec, a natural language processing algorithm that analyzes both the job postings and the applicants’ CVs with the help of Part-of-Speech (PoS) tagging. This helps find correlations between the job requirements and the applicant’s skill summary. The process is augmented with our own algorithms and tuning. The ML model learns at the moment of each Word2vec suggestion, so every successful skill match or addition of a new skill to the base improves the accuracy of future suggestions.
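The matching idea can be sketched as follows. Note that this is a toy illustration, not the production system: the hand-made three-dimensional vectors below stand in for real Word2vec embeddings, and the `skill_match` function and its threshold are hypothetical names chosen for the example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy embeddings standing in for Word2vec word vectors.
emb = {
    "genomics":       [0.9, 0.1, 0.0],
    "bioinformatics": [0.8, 0.2, 0.1],
    "accounting":     [0.0, 0.1, 0.9],
}

def skill_match(job_terms, cv_terms, threshold=0.8):
    """Return CV terms whose best similarity to any job term clears
    the threshold -- matching skills without exact keyword overlap."""
    matches = []
    for skill in cv_terms:
        best = max(cosine(emb[skill], emb[t]) for t in job_terms)
        if best >= threshold:
            matches.append(skill)
    return matches

# "bioinformatics" matches a "genomics" posting despite no shared keyword.
print(skill_match(["genomics"], ["bioinformatics", "accounting"]))
```

The point of embedding-based matching is exactly this: a candidate listing “bioinformatics” is surfaced for a “genomics” posting even though the strings never overlap, which keyword search would miss.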
This solution provides the following benefits:
- drastically reduces the time and cost of applicant search
- augments applicant screening to highlight the most relevant candidates
- reduces the recruiter’s overhead
- augments the recruiter’s soft skills with the platform’s features

Natural Language Generation Engine (NLG)
One of the main problems corporate analytics departments face is the sheer volume of incoming and outgoing data, meaning the analysts have to write a great deal of text (most of which is never read). This is particularly painful for monthly or quarterly reports, especially if they are intended to be actually useful. We came up with a solution that reduced human errors by 58% and made report writing 71% faster.
One of our customers needed an NLG solution to simplify report creation. The main obstacle was that corporate businesses have to process a myriad of reports from underlying departments, and these reports have no unified form or structure. We designed a PoC (proof of concept): a tool that transforms MS Excel spreadsheets into text.
The spreadsheets vary in length and width and may hold data in different cells, yet the algorithm must recognize and label all the parameters.
The process begins with data normalization and processing to locate all the important data in the incoming spreadsheets. The data is then fed into the template generator, which works on a Yes/No/Equal principle:
- if the new value is bigger than the previous one, use the template “xxx increased”
- if the new value is lower, use the template “xxx decreased”
- if the new value is essentially the same, use the template “xxx is equal”, etc.
Creating such simple templates and building trees of them makes it possible to describe a huge variety of situations with a handful of simple parts that can be combined into any construction.
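The Yes/No/Equal principle above can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the actual PoC code: the `describe` function, its templates and its tolerance parameter are all hypothetical.

```python
def describe(metric: str, old: float, new: float, tol: float = 1e-9) -> str:
    """Pick a report sentence template by comparing the new value to the
    previous one (the Yes/No/Equal principle)."""
    if new > old + tol:
        return f"{metric} increased from {old} to {new}"
    if new < old - tol:
        return f"{metric} decreased from {old} to {new}"
    return f"{metric} is equal to the previous period ({old})"

# Composing several such fragments yields a full report paragraph.
report = ". ".join([
    describe("Revenue", 100.0, 120.0),
    describe("Churn", 5.0, 3.0),
    describe("Headcount", 40.0, 40.0),
])
print(report)
```

Chaining many of these small templates into trees is what lets a handful of parts cover a wide variety of reporting situations.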
This helps create a custom-tailored multilayer report automatically, with all the details already filled in. This simple tool delivers diverse and informative reports in minutes and ensures a 58% decrease in human errors along with 71% faster report preparation.
Reviews
Docker Container Management Services for Staffing Company
"They were able to offer a solution for every roadblock quite quickly and implement it flawlessly."
Please describe your company and your position there.
My name is Dmitry, and I am the CEO/Founder @ HireUkraine. We provide CRM/HRM/job portal development and professional staffing services for technology companies.
For what projects/services did your company hire Spaceport?
When we started developing our product, we built it to run on a virtual machine, as that was the most efficient way to allocate resources at the time. However, as Docker containers are now widely used, we wanted to future-proof our platform by making it run in containers and use RESTful APIs.
What were your goals for this project?
We needed to perform several tasks:
- Redesign our application to use RESTful APIs
- Split it into microservices running in Docker containers to support the ongoing growth of our product
- Enable simple and transparent container management
How did you select this vendor?
We contacted many IT service providers from different countries and selected Spaceport based on their obvious technical expertise. They impressed us during a technical call, and we decided to move forward with them.
Describe the project in detail.
As mentioned above, we needed to rebuild our application to use RESTful APIs and split it into microservices running in Docker containers, while also establishing a straightforward, repeatable approach to container management.
What was the team composition?
We worked with a Project Manager, DevOps engineers and a back-end developer.
Can you share any outcomes from the project that demonstrate progress or success?
They delivered everything we asked for. Their developers had deep knowledge of the best ways to operate blockchain applications and split them into microservices for performance, and their DevOps engineers had a good understanding of how to organize the infrastructure for redundancy and scalability.
How effective was the workflow between your team and theirs?
We discussed the sprint tasks daily, and they reported on progress every day through Slack and JIRA. We also held a weekly call with the PM and our executives to keep a finger on the pulse of the project.
What did you find most impressive about this company?
They have ample experience with projects of this kind, so they have ready solutions and polished workflows for getting things done. They were able to offer a solution for every roadblock quite quickly and implement it flawlessly.
Are there any areas for improvement?
Their toolkit is not as wide as we would like — they did not use Docker Swarm, Rancher, Packer or Mesos/Marathon as we wanted them to. They used Terraform, Kubernetes, Docker and other popular open-source and platform-specific tools like Amazon CloudFormation.
I cannot say this is bad, though, as they explained the reasons behind the decision, and those reasons were sound.