Kubernetes – EvaluateSolutions38
https://evaluatesolutions38.com

Sumo Logic Enhances its Observability Platform with Predictive Analytics
https://evaluatesolutions38.com/news/data-news/big-data-data-news/sumo-logic-enhances-its-observability-platform-with-predictive-analytics/
Thu, 20 Apr 2023 14:08:28 +0000

Highlights:

  • Sumo Logic Inc. has recently developed features that forecast application, cloud, and infrastructure consumption and resource demands based on historical data.
  • Hundreds of businesses use the cloud-based data analytics software from Sumo Logic to gain insights into the state of their IT infrastructure.

Sumo Logic Inc., the creator of an analytics-based platform for application performance management and observability, has recently developed features that forecast application, cloud, and infrastructure consumption and resource demands based on historical data.

It is also increasing its support for the OpenTelemetry set of tools, application programming interfaces, and software development kits for instrumenting, generating, collecting, and exporting telemetry data.

Hundreds of businesses use the cloud-based data analytics software from Sumo Logic to gain insights into the state of their IT infrastructure. Its software includes log management, cloud monitoring, software container management, microservices, and cloud security.

Full-stack predictions

Sumo claims to be the first and only full-stack observability platform that delivers predictive analytics for metrics, events, logs, and traces, the essential data components of observability. The service is designed to reduce resource constraints and unanticipated system burdens while mitigating the uncertainty that fluctuating cloud usage introduces to capacity management.

According to the company, Predict for Metrics employs linear and autoregressive models to make predictions by leveraging historical data points to forecast future trends. Metrics query language operators let users view anticipated numbers and integrate them into Sumo Logic dashboards.
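The article does not describe Sumo Logic's actual models, but the general idea of using historical data points with an autoregressive model to project a trend can be sketched in a few lines of plain Python. This is an illustrative AR(1) fit, not the vendor's implementation, and the sample metric values are made up:

```python
def fit_ar1(series):
    """Fit y[t] = a * y[t-1] + b by ordinary least squares."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def forecast(series, steps, a, b):
    """Roll the fitted model forward to predict future data points."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out

cpu_usage = [40, 42, 45, 47, 50, 52, 55]  # hypothetical historical samples
a, b = fit_ar1(cpu_usage)
predicted = forecast(cpu_usage, 3, a, b)
```

In a dashboard setting, the `predicted` values would be plotted alongside the historical series, which is roughly what the query-language operators described above expose to users.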

Sumo Logic already has predictive capabilities for logs, the company’s native data type. Erez Barak, the company’s General Manager and Vice President of engineering, said, “Our origin was in logs, so that made it easier for us to foray into the prediction world. Since we have that baseline of log data, we can establish a baseline for metrics as well.”

Barak stated that the company has achieved an acceptable level of accuracy in internal evaluations and with early adopters. Although Sumo Logic employs aggregated and anonymized data to enhance the quality of its overall predictions, he stated that “we would never have a situation where one customer’s data is used for another customer.”

The feature may also be used to predict how many Sumo Logic credits will be consumed, saving users from unforeseen costs. It can also analyze application performance management trace data to forecast the load on an application or its underlying microservices, allowing customers to better estimate how much CPU, memory, and storage to provision on Amazon Web Services Inc.’s cloud.

Organizations can now also determine which resources, such as provisioned capacity for AWS DynamoDB or provisioned memory for AWS Lambda functions, will run out of capacity.

Aboard the OpenTelemetry Express

Sumo Logic announced this week at the KubeCon + CloudNativeCon event that it has streamlined the process of getting customers up and running on its platform and added support for the Windows operating system to its Distribution for OpenTelemetry Collector.

More than 30 applications are currently supported via OpenTelemetry for database, server, and infrastructure monitoring, according to the company. Sumo Logic Distro for OT can capture telemetry data from macOS, Linux, and Windows platforms using a single collector.

OpenTelemetry is gaining traction as a way for organizations to standardize their observability practices and monitor metrics, logs, and traces together instead of separately. Although most observability vendors support OpenTelemetry to some extent, Barak claimed that some require open telemetry data to be combined with proprietary data types, forcing users to maintain multiple back ends and increasing the risk of vendor lock-in.

On the other hand, Sumo Logic claims it offers a method for ingesting OpenTelemetry data via a single installation, which helps reduce the various manual data integration steps to a single workflow that can be executed in less than five minutes. The unified agent also facilitates the consolidation of observability onto a single platform.
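The article does not show Sumo Logic's configuration, but the standard OpenTelemetry Collector is driven by a YAML pipeline definition along these lines; the `sumologic` exporter name and placeholder endpoint below are assumptions for illustration, not details taken from the article:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  sumologic:                                    # assumed exporter name
    endpoint: https://collectors.example.com    # placeholder endpoint

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [sumologic]
    logs:
      receivers: [otlp]
      exporters: [sumologic]
    traces:
      receivers: [otlp]
      exporters: [sumologic]
```

The point of the single-agent approach is visible in the `service.pipelines` section: metrics, logs, and traces all flow through one collector process to one back end, rather than through separate per-signal agents.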

Barak said, “We’re leaning into this by taking our most important workflows and making sure they work out of the box with OpenTelemetry data. In the last six to 12 months we’re seeing more and more customers saying their future is OTel. That has become their No. 1 driver of strategy and tool consolidation and also for pulling in developers.”

OpenTelemetry is the second fastest-growing project in the Cloud Native Computing Foundation’s ecosystem, after Kubernetes, as measured by the number of developer contributions.

Canonical Introduces Charmed Kubeflow MLOps on AWS
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/canonical-introduces-charmed-kubeflow-mlops-on-aws/
Thu, 13 Apr 2023 17:11:31 +0000

Highlights:

  • With Charmed Kubeflow on AWS, users can now quickly launch and manage their machine learning workloads.
  • In 2022, 35% of organizations reported adopting AI, according to IBM’s Global AI Adoption Index.

Canonical Ltd., the publisher of Ubuntu, has released its machine learning operations toolset Charmed Kubeflow on Amazon Web Services Inc.’s cloud marketplace.

Charmed Kubeflow is available as a software application on AWS, simplifying the deployment and management of machine learning workloads for businesses. The software is an enterprise-grade variant of Kubeflow, an open-source MLOps toolkit designed to work with Kubernetes, the ubiquitous container orchestration software. It provides various utilities that make running artificial intelligence on Kubernetes simpler.

Canonical says Charmed Kubeflow on AWS helps enterprises experiment with machine learning processes. The release comes as a growing number of organizations show strong interest in artificial intelligence and machine learning: according to IBM Corporation’s Global AI Adoption Index, 35% of businesses had adopted AI in 2022. With the rise of generative AI projects such as ChatGPT, interest in the technology is growing rapidly.

Charmed Kubeflow on AWS, according to Canonical, is designed for businesses looking to launch their AI and machine learning initiatives because it is simple to deploy and offers unlimited computing power to experiment without limitations.

Charmed Kubeflow creates a trustworthy application layer for model development, iteration, and production deployment by automating machine learning workflows. Additionally, it offers complete visibility into those workloads so teams can evaluate any difficulties and precisely plan their infrastructure expansion needs.

Users can deploy their models on end devices once the experimentation phase is complete. Meanwhile, Charmed Kubeflow guards against cyberattacks with frequent scanning, patching, and updates to the most recent versions of the machine learning libraries in use. For production-grade deployments, users may move artifacts from the Charmed Kubeflow appliance to an AWS or data center deployment.

According to Aaron Whitehouse, senior director of public cloud enablement at Canonical, Charmed Kubeflow is the best platform for businesses looking to experiment with machine learning for the first time. He said, “The Charmed Kubeflow appliance on AWS gives companies a great way to test out machine learning possibilities quickly and easily, with a clear pathway to a scalable hybrid/multi-cloud deployment if those pilot projects are successful.”

A fully managed version of Charmed Kubeflow on AWS is now accessible through the AWS Marketplace for businesses needing infrastructure support.

F5’s Secure Multi-Cloud Networking Solutions Simplify Operations for Distributed Application Deployments
https://evaluatesolutions38.com/news/cloud-news/f5s-secure-multi-cloud-networking-solutions-simplify-operations-for-distributed-application-deployments/
Wed, 22 Mar 2023 17:20:22 +0000

Highlights:

  • F5 Distributed Cloud Services links applications at the network and workload levels by offering businesses an integrated services stack.
  • Microservices and API-heavy distributed applications have grown in popularity while cloud and hybrid architectures have proliferated.

F5 Inc., a company specializing in network traffic management and application security, recently announced new multi-cloud networking capabilities that extend application and security services across cloud platforms, hybrid architectures, native Kubernetes environments, and the network edge.

F5 Distributed Cloud Services’ new features offer connection and security at the network and application layers. This allows businesses to securely connect applications hosted in various locations while operating across multiple computing environments.

F5 asserts that doing this has, until now, been extremely challenging. With the growing adoption of multi-cloud, the typical organization manages hundreds of applications across many distributed computing platforms, yet lacks an integrated services stack to link those applications at the network and workload levels. The resulting lack of communication between applications complicates telemetry data collection, reduces visibility, and expands the attack surface.

According to Michael Rau, Senior Vice President and General Manager of distributed cloud platform and security services at F5, secure app-to-app communication is a goal for any digital enterprise. Yet, he said, few have succeeded in achieving it.

Rau said, “The proliferation of cloud and hybrid architectures has coincided with the rise of microservices and API-heavy distributed applications — all of which contribute complexity and diminish visibility. F5’s platform-based approach greatly expands our ability to serve customers’ hybrid and multi-cloud use cases.”

According to the company, F5 Distributed Cloud Services gives businesses access to an integrated services stack that integrates applications at network and workload levels. With comprehensive networking and application security features and quicker installation, companies can manage their dispersed applications from a single console.

With automated or one-click provisioning of additional security services like web application firewalls, API security, and DDoS or bot mitigation, it offers advanced networking services for applications across any cloud or environment. These services include load balancing, ingress/egress controls, API gateways, and visibility.

To increase security and accelerate application delivery, the service also benefits from native Kubernetes integration, which enables fine-grained control for particular applications without exposing the underlying network. According to the firm, F5’s Distributed Cloud Network Connect architecture makes it easier to integrate and connect apps to additional cloud locations and providers with automatic provisioning.

According to Zeus Kerravala of ZK Research, these networks must offer application-layer connectivity as enterprise cloud strategies move from using assorted apps and clouds to a true multi-cloud architecture with distributed workloads. “F5 has long been a leader in application networking, and its Distributed Cloud Services provides a fully integrated set of layer 3 to 7 services for securely connecting across clouds and workloads, even those deployed at the edge or branch office,” he said.

Google Cloud Automates Its Global Multiplayer Game Servers on Kubernetes Engine
https://evaluatesolutions38.com/news/cloud-news/google-cloud-automates-its-global-multiplayer-game-servers-on-kubernetes-engine/
Wed, 22 Mar 2023 16:47:31 +0000

Highlights:

  • By dynamically scaling up and down to accommodate player counts, Google Cloud manages the underlying Kubernetes clusters with GKE Autopilot and Agones, ensuring the game server can function flawlessly.
  • GKE Autopilot can integrate with other crucial cloud services, including analytics, matchmaking services, and customized network proxies.

According to Google Cloud, running worldwide multiplayer game servers on Google Kubernetes Engine Autopilot is now a fully automated experience for game developers.

GKE is a cloud-based service that manages and orchestrates the software containers hosting the components of modern agile applications, including video games. It provides a platform for creating and sustaining massively multiplayer online games. With GKE Autopilot, developers get a fully managed, serverless version of GKE in which Google automates all the operational overhead.

Game developers can now use GKE Autopilot to serve players worldwide, according to a blog post by Google Kubernetes Engine’s Senior Product Manager, Ishan Sharma.

He clarified that most developers don’t want to be concerned with the underlying cloud infrastructure. They simply want it to grow with player traffic so they can concentrate on adding new features to their games.

Sharma wrote, “At Google Cloud, we are fixated on making game launches boring by making GKE Autopilot the platform-of-choice for running game workloads for scalability, reliability, and automation.”

With the help of Agones, an open-source game server orchestrator, Google Cloud has made it possible to run dedicated game servers on GKE Autopilot.

The game server, which acts as the definitive source of the game’s state and to which players must connect to interact with the game, is one of the most important components of multiplayer games.

As a result, game servers must continue to function flawlessly at all times and connect thousands of players without any interruptions.

By dynamically scaling up and down to accommodate player counts, Google Cloud manages the underlying Kubernetes clusters with GKE Autopilot and Agones, ensuring the game server can function flawlessly. Developers are no longer required to worry about setting up the underlying infrastructure.
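A dedicated game server of the kind described above is declared to Agones as a custom Kubernetes resource. The manifest below is an illustrative sketch based on Agones' public GameServer resource type, not an example from Google; the names, image, and resource values are hypothetical:

```yaml
apiVersion: agones.dev/v1
kind: GameServer
metadata:
  generateName: my-game-server-        # hypothetical name prefix
spec:
  ports:
  - name: default
    portPolicy: Dynamic                # Agones assigns a free host port
    containerPort: 7654                # port the game binary listens on
  template:
    spec:
      containers:
      - name: game-server
        image: example/udp-game-server:0.1   # placeholder image
        resources:
          requests:
            cpu: "250m"
            memory: "256Mi"
```

Agones watches resources like this and keeps the requested number of game servers healthy, while Autopilot provisions and scales the underlying nodes, which is the division of labor the article describes.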

Sharma explained, “With traditional Kubernetes, this scaling requires resources and time for planning, rightsizing and bin-packing. You might overprovision node pools much earlier in anticipation of scaling up and keep those node pools running longer before scaling down. All this costs money.”

According to analyst Holger Mueller of Constellation Research Inc., gaming is a key cloud application because it necessitates elasticity to give gamers the best experience, with usage swinging unpredictably between highs and lows.

He added, “Running the infrastructure for these gaming workloads manually quickly becomes an expensive endeavor, and mistakes are often made, leading to higher costs or a poorer gaming experience. So automated infrastructure, which Google Cloud now offers with GKE Autopilot, is critical. Combine this with Google’s super-fast network, and you have a very compelling platform for game workloads.”

Google offered several reasons why game developers should consider hosting their games on GKE Autopilot. With GKE Autopilot, developers pay only for the CPU, memory, and storage their servers use.

Therefore, there is no risk of incurring charges for unused capacity or operating system overheads and components. Google quotes a Forrester study demonstrating how GKE Autopilot can cut infrastructure costs by as much as 85% and boost developer productivity by as much as 45%.
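Because Autopilot bills by what pods request rather than by node capacity, sizing the resource requests in the pod spec is what controls cost. A standard Kubernetes pod spec illustrates the idea; the names and values here are hypothetical, not from Google's documentation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: game-server-pod                # hypothetical name
spec:
  containers:
  - name: server
    image: example/game-server:1.0     # placeholder image
    resources:
      requests:                        # Autopilot charges for these amounts
        cpu: "500m"
        memory: "1Gi"
        ephemeral-storage: "2Gi"
```

With node-based pricing, the developer would instead pay for whole machines whether or not the pods filled them, which is the overprovisioning cost Sharma describes below.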

Google emphasized that developers won’t be locked into its cloud if they use GKE Autopilot. Sharma noted that many game developers run workloads across multi-region cluster fleets, on-premises environments, and other cloud platforms.

He noted that games remain portable and flexible because Agones is open source and GKE Autopilot is based on open-source Kubernetes.

Another advantage is multitarget parallel deployment, a Google Cloud feature that enables developers to roll out new features or updates in a particular region.

This means developers can create a new GKE cluster for a small region, deploy to it, and gauge user reaction before releasing the new feature to a larger audience.

Finally, Sharma noted that GKE Autopilot could integrate with other crucial cloud services, including analytics, matchmaking services, and customized network proxies.

Oracle Makes Kubernetes Deployment and Management Easier in its Cloud
https://evaluatesolutions38.com/news/cloud-news/oracle-makes-kubernetes-deployment-and-management-easier-in-its-cloud/
Tue, 21 Mar 2023 14:01:23 +0000

Highlights:

  • The Oracle Container Engine for Kubernetes has a fully managed control plane and is fully compliant with Cloud Native Computing Foundation constructs.
  • According to Oracle, clients can save up to 50% compared with hosting Kubernetes on competing public clouds and benefit from additional services not included in Kubernetes clusters.

Recently, Oracle Corp. unveiled new feature updates to its cloud-based Oracle Container Engine for Kubernetes, claiming that it can streamline operations, lower costs, and increase reliability and efficiency in large-scale systems using the Kubernetes orchestrator for software containers.

The improvements are intended for businesses using agile DevOps methods and constructs like microservices to build and run cloud-native apps on Oracle Cloud Infrastructure. Vijay Kumar, Vice President of product marketing for application development services and developer relations at Oracle, said, “Kubernetes is notoriously complex not only to operate but to find the people with deep skill sets. We’re dramatically simplifying the deployment and operations of Kubernetes at scale.”

Up to 50% Lower Costs Than Competing Public Clouds

According to Kumar, Oracle supports Kubernetes in various runtime settings, from bare metal to serverless operations. The Oracle Container Engine for Kubernetes has a fully managed control plane and is fully compliant with Cloud Native Computing Foundation constructs. According to Oracle, clients can save up to 50% compared with hosting Kubernetes on competing public clouds and benefit from additional services not included in Kubernetes clusters. Kumar added that Oracle provides uniform pricing across all international regions to reduce complexity.

Leo Leung, Vice President of products and strategy at Oracle, said, “A big piece of Kubernetes is compute and on a computer-by-computer basis, we’re less than 50% of the list price of the lowest-cost region of other providers. Then there are additional parts of Kubernetes that require compute to boot up the cluster, and we’re lower cost there as well.”

The improvements include virtual nodes, which let businesses run Kubernetes-based apps reliably and at scale without the operational burden of managing, scaling, upgrading, and troubleshooting the underlying Kubernetes node architecture. With usage-based pricing, virtual nodes also offer pod-level elasticity.

Leung added, “Customers that are deep into Kubernetes may want to have control over worker nodes to get fine-grained control over the infrastructure, such as running all pods inside bare metal. For the majority of customers, though, we believe serverless is the right answer. They don’t want knobs and dials. They want a service that’s going to scale.”

Encompassing Lifecycle Management

With complete lifecycle management that includes deployment, upgrades, configuration changes, and patching, the improvements provide enterprises with greater freedom in installing and configuring their preferred auxiliary operating software or associated applications. Add-ons include access to optional software operators, including the Kubernetes dashboard, the Oracle database, and Oracle WebLogic, and necessary software deployed on the cluster, such as CoreDNS and kube-proxy.

Controls for identity and access management at the pod level are now available. The default number of worker nodes for newly provisioned clusters has been raised to 2,000. Support for inexpensive spot instances has been added, along with financially backed service-level agreements for the worker nodes and the Kubernetes API server.

With the capacity to grow to thousands of nodes, Kumar said, “you can have a fairly large application running on a Kubernetes cluster without having all the networking between clusters.”

Cast AI, a Kubernetes Operations and Cost Management Startup Receives USD 20M for Cloud-native Solutions
https://evaluatesolutions38.com/news/cloud-news/solutions-news/cast-ai-a-kubernetes-operations-and-cost-management-startup-receives-usd-20m-for-cloud-native-solutions/
Mon, 20 Mar 2023 18:08:26 +0000

Highlights:

  • Companies that link their Kubernetes clusters to the Cast AI platform can access powerful cloud-native automation techniques for instant cost savings and view recommended suggestions.
  • Cast AI has seen quarterly revenue growth of over 220%, from an undisclosed base, since the platform’s debut.

Cast AI Group Inc., a startup providing cost and operations management for Kubernetes, announced that it had raised USD 20 million in fresh funding to seize a sizable opportunity in the rapidly expanding cloud-native solutions market.

Founded in 2019, Cast AI provides a cloud optimization platform that has cut customers’ costs on Google Cloud, Microsoft Azure, and Amazon Web Services Inc. by half. The platform uses artificial intelligence to analyze data points and determine the ideal cost-performance ratio, optimizing clusters in minutes.

Companies that link their Kubernetes clusters to the Cast AI platform can access powerful cloud-native automation techniques for instant cost savings and view recommended suggestions. According to reports, the platform recently assisted the social media unicorn ShareChat, ad tech firm Iterable Inc., and the world champion in mobile analytics, Branch Metrics Inc., to save millions of dollars yearly.

Cast AI has seen quarterly revenue growth of over 220%, from an undisclosed base, since its debut, thanks to its capacity to offer optimization solutions that simplify managing cloud-native applications. According to Cast AI, this kind of service is sorely needed in today’s tech-driven world.

Hitachi Ltd., Forbes Media LLC, Samsung Next LLC, Snow Commerce LLC, Surfshare Inc., and Delio Ltd. are notable Cast AI clients.

Yuri Frayman, Chief Executive Officer at Cast AI, said, “This funding is just in time to take advantage of the tremendous opportunity in the market as more and more companies transition to containerized applications in the cloud. With this investment, we can grow as a leading provider of intelligent cloud optimization solutions globally and expand our all-in-one platform capabilities to more cloud-native ecosystems and use cases.”

Early-stage venture capital firm Creandum Advisors AB led the round with a USD 15 million investment, with the remaining USD 5 million coming from prior investors.

Cast AI has now raised USD 43.2 million in total. Previous backers of the startup include Tesonet UAB, DNX Ventures LLC, Florida Funders LLC, Scale Asia Ventures Pte. Ltd., and Samsung Next.

PerfectScale Launches Kubernetes Performance and Cost Optimization SaaS Tools
https://evaluatesolutions38.com/news/cloud-news/perfectscale-launches-kubernetes-performance-and-cost-optimization-saas-tools/
Thu, 09 Mar 2023 17:01:56 +0000

Highlights:

  • PerfectScale provides tools that allow teams to optimize the performance of hundreds of Kubernetes clusters that are the foundation of their most critical applications.
  • PerfectScale platforms can aid any flavor of Kubernetes, consisting of Google LLC’s GKE, Microsoft Corp.’s Azure AKS, Amazon Web Services Inc.’s EKS, and bare-metal deployments.

PerfectScale Inc. recently announced the general availability of its continuous optimization platform for Kubernetes, giving enterprises a new option for automating the stability of their information technology environments.

The software-as-a-service platform is aimed at companies that run distributed, large-scale Kubernetes environments hosting modern, containerized applications. As PerfectScale explains, optimizing these environments is a manual, time-consuming, and challenging task that is nonetheless essential to avoid spiraling cloud costs and application performance issues.

To help with this, PerfectScale provides tools that let teams optimize the performance of the many Kubernetes clusters underpinning their most critical applications. Its software uses advanced, artificial intelligence-based algorithms to evaluate usage patterns and performance and cost metrics, enabling it to continuously optimize those environments for resilience and stability at the lowest possible cost.

Amir Banet, Chief Executive and co-founder of PerfectScale, said system resilience and cost optimization are the biggest priorities for any firm that depends on Kubernetes to power its applications. He explained, “Ineffectively allocating Kubernetes resources may cause performance and overspending problems today, and the problems will persist and get exponentially worse as the application scales. Our mission at PerfectScale is to help organizations get the most out of Kubernetes in an effortless manner by continuously and automatically optimizing each layer of the K8s stack.”

PerfectScale says its platform can support any flavor of Kubernetes, including Google LLC’s GKE, Microsoft Corp.’s Azure AKS, Amazon Web Services Inc.’s EKS, and bare-metal deployments with no operating software installed on the servers. Its prime features include resiliency risk detection to remove issues affecting Kubernetes cluster performance, waste detection to eliminate unnecessary cloud costs, and issue prioritization to find and remediate the most pressing problems. It also provides analysis tools that help teams better understand how system changes will affect their Kubernetes environments’ durability and cost-effectiveness, plus reports that track optimization progress.
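The core of "waste detection" is comparing what workloads request against what they actually use. The sketch below illustrates that idea in plain Python; it is not PerfectScale's algorithm, and the workload names and numbers are invented:

```python
def waste_report(workloads):
    """workloads: list of dicts with requested/used CPU in millicores.

    Returns (name, wasted_millicores, waste_percent) tuples, biggest
    offenders first, mirroring the "issue prioritization" idea.
    """
    report = []
    for w in workloads:
        wasted = max(w["cpu_requested_m"] - w["cpu_used_m"], 0)
        pct = 100.0 * wasted / w["cpu_requested_m"] if w["cpu_requested_m"] else 0.0
        report.append((w["name"], wasted, round(pct, 1)))
    return sorted(report, key=lambda r: r[1], reverse=True)

workloads = [
    {"name": "api", "cpu_requested_m": 2000, "cpu_used_m": 250},
    {"name": "worker", "cpu_requested_m": 500, "cpu_used_m": 450},
]
report = waste_report(workloads)
```

A real system would aggregate usage over time windows and consider memory and spikes as well, but the requested-minus-used comparison is the starting point.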

The platform became available in beta last October and has been well-received by early adopters. Qwilt Inc., a provider of Open Edge technologies, said it was able to significantly reduce its cloud costs using PerfectScale’s platform.

Tomer Tcherniak, a senior site reliability engineer at Qwilt, said, “PerfectScale has removed critical blindspots we had in our Kubernetes environment. We found out many of our workloads and services were wasting nearly 90% of the resources we allocated. Not only are we significantly reducing costs, but we are also improving system performance to ensure we are giving our customers the best possible experience.”

Akamai Technologies Acquires Ondat to Expand Akamai Connected Cloud
https://evaluatesolutions38.com/news/data-news/akamai-technologies-acquires-ondat-to-expand-akamai-connected-cloud/
Fri, 03 Mar 2023 16:49:04 +0000

Highlights:

  • A Kubernetes application’s storage allocation is frequently modified by administrators based on the volume of data it is holding.
  • Ondat offers tools that assist in lowering the risk of data loss in Kubernetes environments in addition to its core feature set.

Ondat, a venture-backed startup with a data storage platform for Kubernetes, will be acquired by Akamai Technologies Inc., the company announced recently.

The deployment and maintenance of software container applications are made simpler by Kubernetes. However, it has few features for controlling the data used by those applications. As a result, businesses that use Kubernetes frequently implement external storage tools like the platform from Ondat.

Adam Karon, chief operating officer and divisional manager of the cloud computing division of Akamai, said, “Storage is a key component of cloud computing and Ondat’s technology will enhance Akamai’s storage capabilities, allowing us to offer a fundamentally different approach to cloud that integrates core and distributed computing sites with a massively scaled edge network.”

London-based startup Ondat, formerly known as StorageOS Inc., has received USD 20 million in funding. Major corporations like DHL and Lloyds Bank plc, one of the biggest financial institutions in the UK, are among its clients. The deal’s financial details were kept private by the companies.

Administrators frequently adjust a Kubernetes application’s storage allocation based on the volume of data it holds. Ondat largely automates the procedure, reducing the time spent on it. Its platform also compresses application data to conserve storage space and encrypts it.
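The kind of automation described above can be approximated with a simple rule: when a volume’s usage crosses a threshold, grow it by a fixed factor. A minimal sketch of that idea follows; the 80% threshold and 1.5x growth factor are illustrative assumptions, not Ondat’s actual policy:

```python
def next_volume_size(capacity_gib: float, used_gib: float,
                     threshold: float = 0.8, growth: float = 1.5) -> float:
    """Return the new volume size in GiB, expanding the volume by
    `growth` whenever usage reaches `threshold` of capacity."""
    if used_gib / capacity_gib >= threshold:
        return capacity_gib * growth
    return capacity_gib  # plenty of headroom left; no resize needed

# A 100 GiB volume that is 85% full gets expanded to 150 GiB.
print(next_volume_size(100, 85))   # 150.0
# A 100 GiB volume at 40% usage is left alone.
print(next_volume_size(100, 40))   # 100.0
```

A real controller would additionally watch usage continuously and issue a storage-API resize request instead of returning a number.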

In addition to its core feature set, Ondat offers tools that help lower the risk of data loss in Kubernetes environments. Using a checksum algorithm, the startup’s platform automatically checks a company’s data for potential errors. It also lets businesses keep multiple standby copies of their data and bring them online if the primary copy becomes unavailable during an outage.
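Checksum-based error detection of this kind can be sketched in a few lines. The article does not say which algorithm Ondat uses, so CRC-32 from Python’s standard library stands in here:

```python
import zlib

def store(data: bytes) -> tuple[bytes, int]:
    """Persist data together with its CRC-32 checksum."""
    return data, zlib.crc32(data)

def verify(data: bytes, checksum: int) -> bool:
    """Recompute the checksum and compare; a mismatch means the
    bytes were corrupted somewhere between write and read."""
    return zlib.crc32(data) == checksum

payload, crc = store(b"customer-records-v1")
print(verify(payload, crc))                    # True: intact copy passes
print(verify(b"customer-records-v2", crc))     # False: altered copy fails
```

Production systems typically pair such checks with the standby replicas mentioned above: when verification fails, a known-good copy is promoted.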

Kubernetes clusters typically contain multiple servers. When Kubernetes places an application on one server and its data on another, performance can suffer: the application must make time-consuming network requests to fetch the data it needs for processing tasks.

Ondat’s platform can place each workload on the server that holds its data to maximize performance. Keeping data off the network improves performance, and when network transfers are unavoidable, the platform speeds them up by compressing the traffic.
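The placement logic described above amounts to preferring the node that already holds a workload’s volume. A toy scheduler makes the idea concrete; the node and volume names are made up for illustration:

```python
def place_workload(volume: str, volume_locations: dict[str, str],
                   nodes: list[str]) -> str:
    """Prefer the node that already stores the workload's volume,
    so reads never have to cross the network; otherwise fall back
    to the first available node."""
    preferred = volume_locations.get(volume)
    if preferred in nodes:
        return preferred
    return nodes[0]

locations = {"orders-db": "node-b", "cache": "node-c"}
# Data for orders-db lives on node-b, so the workload lands there.
print(place_workload("orders-db", locations, ["node-a", "node-b"]))  # node-b
# cache's data lives on a node that is unavailable; fall back.
print(place_workload("cache", locations, ["node-a", "node-b"]))      # node-a
```

Real schedulers weigh locality against CPU and memory pressure rather than treating it as an absolute preference.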

Ondat is one of many storage platforms targeting Kubernetes. The startup says ease of use is a key differentiator: the software is packaged as a container and can be configured with just a few commands, which makes deployment relatively simple.

Akamai hopes to complete the acquisition by March 31. The company will integrate the technology with its recently launched Akamai Connected Cloud platform, an edge computing service built on resources from Linode LLC, a Philadelphia-based hosting company Akamai purchased for USD 900 million last year.

Sathya Sankaran, founder and CEO of CloudCasa, said, “The acquisition of Ondat by Akamai is another indication that Kubernetes is entering the mainstream for enterprises deploying stateful business applications on Kubernetes environments in public clouds. Ondat fills the distributed storage management gap in the Linode Kubernetes Environment for the Akamai Connected Cloud.”

The Akamai Connected Cloud, unveiled last month, consists of edge computing sites set up in cities around the world. Developers can deploy their applications at those locations to reduce latency for customers; a company with users concentrated in a particular city, for example, may deploy its application there to speed up network access.

By the end of the year, Akamai wants to have edge computing locations for its Connected Cloud in more than 50 cities worldwide. The business also intends to construct “enterprise-scale core cloud computing sites” in the United States and Europe. They will provide various capabilities, including computing, storage, and database services.

Mirantis Acquires Shipa, an Application Management Startup
https://evaluatesolutions38.com/news/cloud-news/mirantis-acquires-shipa-an-application-management-startup/
Mon, 30 Jan 2023 21:05:58 +0000

Highlights:

  • Streamlining the configuration process helps software teams onboard new applications more quickly, speeding up software deployment by 38%.
  • Mirantis also plans to ship Shipa’s software with Mirantis Kubernetes Engine, a commercial Kubernetes distribution based on technology obtained through the 2019 acquisition of Docker Inc.’s enterprise business.

Mirantis Inc. has acquired Shipa Inc., a venture-backed startup that helps developers deploy software container applications to production more easily.

When announcing the deal, Mirantis said Shipa’s technology will be incorporated into Lens, an application it obtained through the 2020 acquisition of another startup, Lakend Labs Inc. Lens lets developers manage multiple Kubernetes environments from a single interface and is used by nearly half of the enterprises on the Fortune 100 list.

Reportedly, Shipa was acquired for between USD 10 million and USD 30 million. The startup had previously raised USD 3.75 million from investors.

Large enterprises generally deploy their container applications across multiple Kubernetes clusters, and developers previously had to specify each cluster’s configuration separately. Shipa’s solution consolidates the configuration process, saving developers time and effort.

With the platform, a software team can write a script that describes the configurations to be applied to containers. The script might, for instance, require all containers to encrypt outbound network connections. Instead of configuring each cluster independently, developers can then apply the script uniformly across multiple Kubernetes clusters.
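Conceptually, applying one declarative policy to many clusters looks like the sketch below. The policy keys and cluster names are hypothetical, not Shipa’s actual schema:

```python
# One shared policy, declared once, applied everywhere.
policy = {"encrypt_outbound": True, "max_replicas": 10}

# Each cluster starts with its own local, inconsistent settings.
clusters = {
    "us-east": {"max_replicas": 3},
    "eu-west": {"encrypt_outbound": False},
}

def apply_policy(cluster_config: dict, policy: dict) -> dict:
    """Overlay the shared policy on a cluster's local settings;
    policy values win, so every cluster ends up uniform."""
    return {**cluster_config, **policy}

for name, config in clusters.items():
    clusters[name] = apply_policy(config, policy)

# Every cluster now encrypts outbound traffic, regardless of its
# previous local setting.
print(all(c["encrypt_outbound"] for c in clusters.values()))  # True
```

The benefit is the same one the article describes: the team edits one policy instead of touching every cluster by hand.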

Shipa states that streamlining the configuration process helps software teams onboard new applications more quickly, speeding up software deployment by 38%. The platform also eases many day-to-day application maintenance tasks.

“Our goal at Shipa, from the beginning, was to give DevOps and platform engineering teams the capability to choose their own underlying tools with a focus on automation to reduce the complexity of the technology infrastructure required by cloud-native applications,” said Bruno Andrade, co-founder and chief executive officer. “Our technology makes deployment and management of applications and updates much easier and faster by letting developers focus on what they do best and not infrastructure,” Andrade added.

Shipa’s technology will be integrated into Lens, Mirantis’ application for managing Kubernetes clusters. With it, administrators can monitor cluster performance, apply configuration changes, and troubleshoot technical issues.

Mirantis also plans to ship Shipa’s software with Mirantis Kubernetes Engine, a commercial Kubernetes distribution based on technology obtained through the 2019 acquisition of Docker Inc.’s enterprise business.

Adrian Ionel, co-founder and CEO of Mirantis, said, “Shipa’s technology puts groundbreaking application discovery, optimization, security and management capabilities in the hands of Lens users. It will help cloud-native software teams move even faster, freeing them to code and innovate.”

Container Technology: What are the 6 major challenges for its adoption?
https://evaluatesolutions38.com/insights/tech/container-technology-what-are-the-6-major-challenges-for-its-adoption/
Fri, 20 Jan 2023 19:35:42 +0000

Highlights:

  • Containers can be vulnerable to attacks if they are not properly configured and secured. Containers share the host system’s kernel, so any vulnerability in the host system can also affect the containers running on it.
  • Container orchestration tools such as Kubernetes can be complex to set up and manage, particularly for organizations that are new to container technology.

Container technology is not just another passing trend. In 2019, migrating legacy applications to containers emerged as one of the hottest IT infrastructure trends. The technology appeals to organizations because it lets development teams move software reliably from one environment to another, and it appears to be a genuine technology shift capable of changing the IT industry for good.

One report found that application containers will see the fastest growth of any segment, with an estimated compound annual growth rate of 40 percent. For developers, a container is a way to package an application together with its dependencies so it runs in isolation from other processes.

However, container technology has its issues. Most enterprises run traditional infrastructure, backed by major investments and mature processes that keep critical systems running. At the same time, the development and DevOps communities are evolving so quickly that container technology can be difficult to keep up with. And while containers make it easy to build and deploy an environment, they are stateless, and demonstrating that they are compliant and secure remains hard. Storage is yet another issue at the core of all this.

1. Security vulnerabilities: Containers can be vulnerable to attacks if they are not properly configured and secured. Containers share the host system’s kernel, so any vulnerability in the host system can also affect the containers running on it. Additionally, containers can be targeted directly by attackers, and the use of third-party images can introduce unknown vulnerabilities into the system.

2. Resource contention: Containers are designed to be lightweight and efficient, but when multiple containers are running on a single host, they can compete for resources such as CPU, memory, and storage. This can lead to performance issues, particularly in environments with a large number of containers.

3. Persistent storage: Containers are ephemeral, which means that data stored within a container is not persistent and is lost when the container is stopped or destroyed. This can be a problem for applications that require long-term storage of data.

4. Complexity: Container orchestration tools such as Kubernetes can be complex to set up and manage, particularly for organizations that are new to container technology. This complexity can also make it difficult to troubleshoot issues and maintain a stable and reliable container environment.

5. Limited support for legacy applications: Containerization requires a significant overhaul of an application’s architecture and infrastructure, which can be challenging for legacy applications that were not designed to run in a containerized environment.

6. Integration with existing infrastructure: Container technology is still relatively new and may not be fully integrated with existing infrastructure and tools, such as monitoring and logging systems. This can make it difficult to effectively manage and monitor containerized applications.
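Resource contention (challenge 2 above) is easiest to see with the proportional-share model that Linux cgroups use for CPU: when every container is busy, each receives CPU time in proportion to its share weight. A simplified illustration of that arithmetic, with hypothetical container names and weights:

```python
def cpu_allocation(shares: dict[str, int], total_cpus: float) -> dict[str, float]:
    """Split CPU capacity among containers in proportion to their
    share weights -- a simplified model of cgroup CPU weighting
    under full contention (every container wants more CPU)."""
    total = sum(shares.values())
    return {name: total_cpus * s / total for name, s in shares.items()}

# Three busy containers on a 4-CPU host: the 1024-share container
# gets twice the CPU of each 512-share container.
print(cpu_allocation({"web": 1024, "worker": 512, "batch": 512}, 4.0))
# {'web': 2.0, 'worker': 1.0, 'batch': 1.0}
```

Note that shares only matter under contention; an idle host lets any container use all available CPU, which is exactly why contention problems often surface only at peak load.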

Some additional challenges

Networking challenges: Containers on a single host share the host’s network resources, which can lead to issues such as port conflicts when container ports are published to the host, as well as communication problems between containers.

Scaling challenges: Container orchestration tools can make it easier to scale applications, but there are still challenges associated with scaling containers, including resource contention and network configuration.

Deployment challenges: Deploying containers can be complex, particularly in large, distributed environments. It can be difficult to ensure that all containers are deployed correctly and that the application is functioning as expected.

Limited support for certain technologies: Some technologies, such as kernel modules and certain filesystems, may not be fully supported in a containerized environment. This can limit the types of applications that can be run in containers.

Overall, while container technology has many benefits, it is important for organizations to carefully consider these challenges and address them before implementing a containerized environment.

Conclusions

Container management software supports rapid DevOps deployment.

Container technology is also fast and versatile. Like a desktop PC, a virtual machine takes several minutes to boot up; because the server OS is already running, a container can start in seconds. This lets containers start and stop quickly, flexing up during high demand and scaling back down when they are no longer needed.
