Data and AI hardware infrastructure optimization: Granulate scores $12 million in Series A funding


With the world still on lockdown, the economy taking a hit, and uncertainty reigning supreme, we don't see as much funding news as we used to these days. It's no surprise. What is surprising is today's news.

Granulate, an Israel-based company that optimizes infrastructure performance in real time, allowing businesses to reduce compute costs and increase revenue, announced today that it has raised $12 million in Series A funding. The round was led by Insight Partners, one of the largest global funds focused on investing in scale-up software companies, with the participation of TLV Partners and Hetz Ventures.

ZDNet connected with Asaf Ezra, co-founder and CEO of Granulate, to talk about what makes Granulate special enough to pull this off.

Low-level optimization, high-level cost reduction

Ezra and his co-founder, Tal Saiag, have a story shared by many Israeli startup founders. They met over a decade ago and served together in one of Israel's largest technological intelligence units. Ezra said that besides helping them hone their problem-solving skills and initiative, this also had a technical influence on them.

Ezra and Saiag were exposed to the intricacies of Linux servers and the Linux kernel. They learned how to fine-tune Linux for better performance, and by talking to business users and Linux experts, they realized there was a recurring theme there: not only in the need for low-level optimization to get better performance under stress, but also in the ways this could be achieved.

Granulate was founded in 2018, and its founders assembled a team largely comprised of graduates from the same intelligence unit they had been part of. Initially, they aimed to develop a cyber security product, but in the end their focus became hardware performance optimization.


Granulate promises something that's difficult to pass up, especially in today's economic climate: do more with your hardware, without upgrading.

Ezra recognized that cost reduction in today's economic climate is a key goal for organizations. In that sense, Granulate may be one of the lucky ones that actually benefits from the new priorities: "We are in a unique position to help companies with the most pressing issue at the moment, cost reduction, so this is also an opportunity," Ezra said.

But how does Granulate help organizations optimize their infrastructure and reduce their costs? This is the $12 million question we discussed at length with Ezra. When Granulate says "optimized infrastructure," it means solving some very specific low-level problems, such as thread scheduling, connection pooling, and inter-process communication.

Anyone who has ever worked with a server knows how difficult these are to tackle. Granulate chose to focus on Linux servers for several reasons. First off, Linux dominates data centers everywhere, so this made sense from a market segment and data accessibility perspective. Also, Linux's open source nature meant that Granulate could work with it without roadblocks.

The concept of an agent is central to how Granulate works. Granulate agents consist of kernel- and user-level components, and can be installed on any Linux server (bare metal or virtual machine), supporting any architecture, be it data center, multi-cloud, or hybrid environments.

The agents derive, from resource consumption patterns, how to best fit the machine to the load applied to it, creating a streamlined environment for the application. What's more, in the process the agents collect performance metrics that can be integrated into existing monitoring tools such as Prometheus, New Relic, or AppDynamics.
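To give a flavor of what such an integration involves, here is a minimal sketch, not Granulate's actual code: rendering agent-collected metrics in the Prometheus text exposition format, which is what lets an existing Prometheus setup scrape them. The metric and label names are invented for illustration.

```python
# Illustrative sketch (metric and label names are our own assumptions):
# render a dict of collected metrics as Prometheus exposition lines.
def to_prometheus(metrics, labels):
    """Render metric values in the Prometheus text exposition format."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return "\n".join(
        f"{name}{{{label_str}}} {value}"
        for name, value in sorted(metrics.items())
    )

sample = {"cpu_utilization": 0.34, "run_queue_length": 5}
print(to_prometheus(sample, {"host": "web-01"}))
# cpu_utilization{host="web-01"} 0.34
# run_queue_length{host="web-01"} 5
```

In practice, a monitoring integration would also carry metric types and help text; the point here is only that the agent's data maps naturally onto what scraping-based tools already consume.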

Autonomous agents

Ezra said Granulate's monitoring tool integrations are built on a case-by-case basis. However, the agents collect metadata that is not far from what monitoring tools already collect, so adding new integrations is not too difficult. The same goes for deployment. Granulate's agents are self-contained, so there is not much difference in how they are deployed in different environments.

Agents can also be deployed with the help of tools such as Chef or Ansible. On bare metal, agents can intervene to a greater extent. Part of that has to do with the fact that agents have access to more data, and more server parameters, in bare metal environments.

So Granulate's agents attach to servers, collect data, and then use that data to identify usage patterns that allow them to optimize those servers. We got the impression this looks like a typical case of machine learning. We were right that it is machine learning, but wrong about the typical part.

The agents' role is not merely to collect data and ship it through a pipeline to Granulate's servers to train machine learning models there. The agents are truly autonomous: data collection, machine learning training, and inference all happen within the agents themselves.


Granulate's secret sauce is its autonomous agents. The agents use machine learning to identify patterns in server utilization, and intervene to optimize it.

There are several reasons for this, as Ezra explained. The approach allows Granulate's agents to work with less data and lower latency. Perhaps more importantly, each agent experiences different loads, as it is deployed on a server with its own unique workload. The autonomous approach allows each agent to adapt to its environment.

One point Ezra stressed is that agents use localized data generated by repetitive processes, such as serving web server requests. Such a process repeats many times over a short time span, and agents focus on the recent history of the server they are attached to, rather than using historical data to train their models.

Agents may get some initial configuration parameters, but past that point they rely on real-time data, and quickly build their own model specific to their environment. Ezra was adamant about this approach: "You don't want what happened a week ago to affect your strategy for what you do now. The loads could be completely different."

The sliding window of data the agents monitor is actually just a few seconds long. Data on the agent's environment is also used, but in a different way. In a cluster running a type of server Granulate has worked with before, things like the number of machines, or the CPU and memory parameters of the machines, become features in the agents' machine learning models.
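The sliding-window idea can be sketched in a few lines. This is a minimal illustration under our own assumptions (the window length, feature names, and class layout are invented, not Granulate's design): keep only the last few seconds of samples, and derive model features from that recent history alone.

```python
from collections import deque

# Illustrative sketch, not Granulate's implementation: a few-second
# sliding window over utilization samples, summarized into features.
class SlidingWindow:
    def __init__(self, span_seconds=5.0):
        self.span = span_seconds
        self.samples = deque()  # (timestamp, value) pairs, oldest first

    def add(self, value, now):
        """Record a sample and evict anything older than the window span."""
        self.samples.append((now, value))
        while self.samples and now - self.samples[0][0] > self.span:
            self.samples.popleft()

    def features(self):
        """Summarize the current window into features for a lightweight model."""
        values = [v for _, v in self.samples]
        if not values:
            return {"mean": 0.0, "peak": 0.0}
        return {"mean": sum(values) / len(values), "peak": max(values)}

window = SlidingWindow(span_seconds=5.0)
window.add(0.2, now=0.0)
window.add(0.8, now=1.0)
window.add(0.4, now=7.0)  # the first two samples have aged out by now
print(window.features())  # {'mean': 0.4, 'peak': 0.4}
```

The design choice mirrors Ezra's point: stale samples are discarded outright, so a load spike from last week can never leak into the model driving decisions right now.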

100% utilization is not recommended, but performance improvement is

Infrastructure optimization is good. But is there such a thing as too much optimization? In one of Granulate's use cases, where the priority was to maximize cost reduction, the client began reducing cluster size by removing machines. Granulate reports that the client kept going until performance dropped back to pre-Granulate levels, achieving an astonishing 33% reduction in compute costs.

"Most companies run with 35% IT infrastructure utilization or less due to strict service and stability needs. Granulate resolves the trade-off between service quality and cost, giving customers improved results on both fronts," Ezra said.

However, he went on to add that achieving 100% utilization is something they do not recommend to anyone. Clusters offer failover capabilities, so removing this "slack" could jeopardize performance, or even lead to downtime if there is a traffic spike or hardware failure.
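The trade-off lends itself to back-of-the-envelope arithmetic. The numbers below are our own illustration, not Granulate's figures: if optimization multiplies per-machine throughput by some factor, how far can a cluster shrink while staying under a utilization safety ceiling that preserves failover headroom?

```python
import math

# Back-of-the-envelope illustration with invented numbers, not
# Granulate's: how many machines are needed after a throughput gain,
# while keeping utilization under a safety ceiling?
def downsized_cluster(machines, utilization, speedup, ceiling=0.7):
    work = machines * utilization    # total load, in machine-capacities
    per_machine = speedup * ceiling  # load one optimized machine may carry
    return min(machines, math.ceil(work / per_machine))

# 100 machines at 35% utilization, a 1.5x throughput gain, 70% ceiling:
print(downsized_cluster(100, 0.35, 1.5))  # 34
```

Note the ceiling parameter: setting it to 1.0 would model the 100% utilization that Ezra explicitly warns against, leaving no slack for traffic spikes or hardware failures.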


Cost savings are one way to take advantage of infrastructure performance optimization. Another, popular in AdTech, is to keep the same infrastructure and achieve better results with it.

But there is another way to benefit from performance optimization, Ezra said: "You don't necessarily have to reduce the size of the cluster. You can reduce the capacity of the machines in the cluster while improving performance, and get a lot of headroom. You can scale better and make better decisions."

As Ezra went on to add, not everyone has the same goal of minimizing infrastructure-related costs. Some choose to keep the same infrastructure, but benefit from the increased performance they can get out of it. For e-commerce or AdTech clients, for example, increased performance means a competitive advantage.

In a way, Ezra said, what Granulate does is equivalent to having a dedicated system administrator monitoring and fine-tuning servers at all times, but faster and more efficiently than any system administrator possibly could.

In terms of future plans, the goal is to triple the size of Granulate's team and expand its clientele. Ezra said he thinks of Granulate's Series A not just as a cash injection, but as a partnership, as Insight Partners has a proven track record in scaling up SaaS companies.


