Interview IT Svit on Cloud and Big Data


Clutch spoke with Volodymyr Fedak, the CEO of IT Svit, about the intersection of cloud, big data, and DevOps. Volodymyr outlines the factors companies should consider when selecting a cloud storage provider.

Learn more about IT Svit on their Clutch profile.


Please describe your organization and your position. What services do you offer relating to cloud, big data, and DevOps?

My name is Volodymyr, and I am the CEO of IT Svit. The business started in 2005, so we are celebrating our 12th year on the market. We started out as a small team providing hardware and system administration services and have grown into a full-cycle software and web development company with DevOps, artificial intelligence, and data science departments. We’ve also recently added a blockchain department. We have over 60 developers on-site, and the team is growing. Regarding cloud and DevOps services, we offer migration to and between clouds, infrastructure development on top of cloud providers, 24/7 support, workflow design, automation of existing workflows, security, and CI/CD [continuous integration/continuous delivery].


What has been the biggest challenge you've witnessed organizations face as they migrate to cloud computing?

Three technical challenges should be outlined. The first is the absence of documentation covering infrastructures, applications, or competencies; this applies to cases where even the developers cannot tell exactly how something works. The second is poor application design, such as when an application can’t be clustered or scaled natively. The final challenge is the presence of legacy components, such as deprecated or outdated devices, which are hard to reproduce in a new environment.

A couple of human factors should also be mentioned, the first being hesitation. Sometimes a client will try to delay the migration procedure as much as possible, even after major bottlenecks in the existing infrastructure are eliminated and the system is stable. On the other end of the spectrum is haste. Some clients expect that infrastructure changes happen overnight and that they will be cheap and redundant by default.

What are some of the factors that companies need to consider when selecting a storage platform?

It is necessary to take into account not only the amount of data being stored but also the nature of that data. Block storage requirements are different from those of application content or databases. It’s best to consider the number of input/output operations per second, latency, the amount of available disk space, the cost of infrastructure, and the scaling capacity of the cloud provider. For example, if we need 100 or 1,000 servers in a week, will the cloud provider be able to provide them? This is an important factor to weigh before selecting a provider.


How does big data tie into cloud storage and computing?

The cloud is elastic by nature and seems an ideal choice for big data and data analysis. At the same time, the cloud’s distributed nature can be problematic for big data scaling. It is challenging to make storage perform at a level that enables this kind of distributed computing; storage is the biggest reason people avoid the cloud for big data processing. It is necessary to watch scaling capacity closely, since the data grows quickly in size, and it can be necessary to scale the storage just as quickly. When building an infrastructure on top of any cloud provider or host, companies should consider storage performance in a highly virtualized, distributed cloud.

Big data analysis is usually a continuous, effectively infinite process, which creates an uneven load on cloud hosts. If hosting is not dedicated but shared with other clients, this can degrade overall host performance.

Because several pieces of a big data engine run at full tilt all the time, those pieces are an ideal fit for dedicated infrastructure. The other components carry more variable loads and are a better fit for the cloud.


How do you define the DevOps approach?

To me, DevOps is similar to Agile principles but adapted to infrastructure repeatability, component immutability, versioning, automation, continuous integration, and delivery or deployment. It also encompasses the documentation and integration between DevOps, development, operations, and QA teams. 

How does DevOps lead to improved IT infrastructure and cloud reliability?

The benefits of DevOps come from its principles. Infrastructure repeatability ensures that all elements are stable and behave as expected: when components transition or recovery is required, we know that everything will work properly. Component immutability leads to stable operation throughout a component’s lifecycle. It means we prefer not to change the settings of a running application, favoring instead replacing the application with one in the new state.
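As a hedged sketch of this replace-instead-of-reconfigure idea, here is what component immutability can look like in Terraform. The resource and variable names are hypothetical; this is an illustration of the principle, not IT Svit's actual configuration:

```hcl
# Hypothetical example: an application server treated as an immutable unit.
variable "app_ami_id" {
  description = "Machine image that encodes the application's current state"
  type        = string
}

resource "aws_instance" "app" {
  ami           = var.app_ami_id # a new image means a new, replaced instance
  instance_type = "t3.micro"

  lifecycle {
    # Rather than mutating a running instance, Terraform first creates a
    # replacement carrying the new state, then destroys the old one.
    create_before_destroy = true
  }
}
```

Changing `app_ami_id` then triggers a replacement rather than an in-place edit, which matches the preference for swapping in a new state over reconfiguring a running application.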

Versioning ensures that infrastructure or component state is expected and predefined. It makes changes happen transparently. If a component has been upgraded, and there is a performance decrease, we know that a particular component version caused it and which version was stable, making the recovery process simple. 

The less human intervention there is, the better. When operations are automated, we spend less time on routine work and more time on improvements. The CI/CD practice helps prevent integration problems and makes changes happen smoothly. Documentation helps us understand the underlying processes and share knowledge between teams. Finally, there is integration between the development, operations, and QA teams: cooperation helps reveal bugs or issues at the earliest stages of development and fix them promptly.


AWS, Azure, and Google are considered three of the most prominent cloud computing providers. We ask that you rate them on a scale of 1 to 5 with 5 being the best score. 

How would you rate them for functionality and available features?

AWS – 5 – My score is based on its maturity.
Azure – 4 – Several features are still in beta or were added recently.
Google – 4 – Several features are still in beta or were added recently. 

How would you rate each provider for ease of use and ease of implementation?

We prefer to follow an infrastructure-as-code approach and use Terraform to achieve this.

AWS – 5 – Almost everything can be implemented using Terraform. 
Azure – 3 – Only the main services are available for implementation using Terraform. 
Google – 4 – Most services can be implemented using Terraform. 
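To illustrate what the infrastructure-as-code approach looks like in practice, here is a minimal, hypothetical Terraform fragment; the provider, region, and names are assumptions made for the example:

```hcl
# A small piece of infrastructure captured as versionable code.
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "app_logs" {
  bucket = "example-app-logs"
}
```

Because a definition like this lives in version control, the same infrastructure can be reproduced, reviewed, and rolled back like any other code, which is what makes ease of implementation with Terraform a meaningful criterion for comparing providers.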

How would you rate them for support, as in the response of their team and the helpfulness of available resources online?

AWS – 4 – Sometimes, we have to wait for several days just to get a reply saying that we’ve been forwarded to another support team, after which we have to wait for a reply from that team. 
We haven’t used the support of Google or Azure much. 

How likely are you to recommend each provider to a friend or colleague?

AWS – 5 – It is a reliable provider with a large number of services.
Azure – 4 – We do not use this provider much. 
Google – 4 – The provider is big enough, but some of its services are still in the beta stage. On the other hand, it is cheaper than Amazon. 

How would you rate them for overall satisfaction with the platforms?

AWS – 5 – It has a large number of services that integrate tightly with each other. The other cloud providers seem to take AWS as an example.
Azure – 4 – It has SLAs in place for its entire offering. 
Google – 3 – It would be good for Google to have SLAs in place for all of its services, but some are still in beta.