ScaleOps Composes New Score For Cloud-Native Orchestration
In enterprise technology, if there’s an opportunity to put an Ops-for-operations tag on the end of another technology term, then we generally go for it. Everything gets an Ops.
The once unloved operations department (typically consisting of everyone from database administrators to site reliability engineers to penetration testers to plain old sysadmins and so on) has been elevated to star status in recent years.
ScaleOps is not, despite the name, narrowly a specialist in application or data service scaling for operations teams. The Tel Aviv-based startup says it is on a mission to automate the wider management of Cloud environments so that organizations can reduce Cloud costs.
Understanding Cloud Resource Orchestration
The company recently announced the release of its first fully-automated Cloud-native resource orchestration platform.
Resource orchestration covers the tasks that arise because Cloud-native environments are increasingly dynamic and interconnected, which makes them highly complex and often tedious to manage.
Kubernetes’ native container sizing, scaling thresholds and node type selection rely on static configurations, but consumption and demand are highly dynamic. Software engineers can find themselves spending precious time manually adjusting Cloud resources to meet fluctuating demand while trying to avoid underprovisioning or overprovisioning.
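To make the static-configuration point concrete, here is a minimal sketch (not ScaleOps code) of the kind of one-off adjustment an engineer might make by hand using the official Kubernetes Python client; the deployment name, namespace, container name and resource values are illustrative assumptions.

```python
# Illustrative sketch of manual rightsizing: bumping one container's CPU/memory
# requests and limits with the official Kubernetes Python client.
# The deployment "checkout-service" and namespace "production" are hypothetical.
from kubernetes import client, config

config.load_kube_config()          # use local kubeconfig credentials
apps = client.AppsV1Api()

# Static values an engineer would otherwise guess at after load testing.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "checkout-service",  # hypothetical container name
                    "resources": {
                        "requests": {"cpu": "500m", "memory": "512Mi"},
                        "limits":   {"cpu": "1",    "memory": "1Gi"},
                    },
                }]
            }
        }
    }
}

# Apply the patch; every change like this has to be revisited whenever
# traffic patterns shift, which is the repetitive work described above.
apps.patch_namespaced_deployment(
    name="checkout-service", namespace="production", body=patch
)
```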
According to Yodar Shafrir, ScaleOps’ co-founder and CEO, “Experienced software engineers spend hours trying to predict demand, running load tests and tweaking configuration files for every single container. It’s impossible to manage this at scale. We realized there’s a huge need for a context-aware platform.”
Automated Cloud-Native Resource Orchestration
The ScaleOps platform continuously optimizes and manages Cloud-native resources during runtime, i.e. while applications are actually executing and working. The technology is designed to ensure that application scaling matches real-time demand. Instead of relying on static allocations, it allocates resources dynamically, automatically rightsizing containers based on application needs.
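As a generic illustration of what runtime rightsizing means in practice (this is not ScaleOps’ algorithm, just a simple heuristic sketch), the snippet below derives a CPU request from recent usage samples rather than from a hand-tuned static value; the percentile and headroom figures are assumptions.

```python
# Generic rightsizing heuristic: choose a CPU request (in millicores) from
# recent runtime usage samples using a high percentile plus some headroom.
import math


def recommend_cpu_request(usage_millicores: list[int],
                          percentile: float = 0.95,
                          headroom: float = 1.15) -> int:
    """Return a suggested CPU request in millicores for one container."""
    if not usage_millicores:
        raise ValueError("need at least one usage sample")
    samples = sorted(usage_millicores)
    # Index of the requested percentile within the sorted samples.
    idx = min(len(samples) - 1, math.ceil(percentile * len(samples)) - 1)
    return math.ceil(samples[idx] * headroom)


# Example: samples collected while the application is actually serving traffic.
print(recommend_cpu_request([120, 180, 240, 310, 290, 450, 380]))  # -> 518
```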
“The only way to free engineers from ongoing, repetitive configurations and allow them to focus on what truly matters is by completely automating resource management down to the smallest building block: the single container,” added CEO Shafrir.
Company Background and Recognition
Co-founded by Yodar Shafrir (CEO) and Guy Baron (CTO) in 2022, ScaleOps manages the production environments of companies including Wiz, PayU, Orca Security, At-Bay, RTL, OutBrain, Salt Security and Noname Security.
Cloud Complexity and Management Challenges
We hear a lot about how Cloud computing makes our lives simpler. That message is aimed not just at users; it is also presented as a way of giving the software engineering team a happier time and what people like to call a better User eXperience (UX) across all tasks and workflows.
Being able to orchestrate the base layers of the Cloud to make sure there is enough gas in the tank ought to be a given, but sometimes it clearly is not, which is why companies like ScaleOps emerge.
In truth, the name ScaleOps does adhere to the same portmanteau-powered concept championed by DevSecOps, FinOps, MLOps, APIOps and of course NoOps. It’s just not quite a de facto industry term on its own yet, so watch this space.
Source: Forbes