Artificial Intelligence – EvaluateSolutions38

Atlassian Integrates Generative AI into Confluence and Jira
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/atlassian-integrates-generative-ai-into-confluence-and-jira/
Mon, 24 Apr 2023 15:03:19 +0000

Highlights:

  • Atlassian Corporation Plc became the most recent technology company to incorporate generative artificial intelligence capabilities into its flagship collaborative software offerings.
  • The ability of Atlassian Intelligence to translate natural language queries into Atlassian’s Jira Query Language is yet another interesting feature that should prove useful to developers.

Atlassian Corporation Plc is the latest technology business to include generative artificial intelligence capabilities in its core collaborative software solutions.

The new technology, Atlassian Intelligence, is partially based on the company’s models acquired through the January 2022 acquisition of Percept.AI. It also utilizes OpenAI LP’s GPT-4 model, notable for powering the ChatGPT chatbot, whose release late last year ignited a virtual AI arms race among major tech companies.

Dozens of software companies have attempted to capitalize on the hype surrounding generative AI, which enables machines to interact with humans and respond in a nearly realistic manner by answering queries, locating information, conducting tasks, and more.

Atlassian Intelligence was developed utilizing large language models at the core of generative AI and operates by constructing “teamwork graphs” that depict the various types of work being performed by teams and their relationships. Atlassian says its open platforms let it incorporate context from third-party apps.

GPT-4, which has been trained on vast amounts of publicly available online text, will be able to assist teams in multiple ways, according to Atlassian, including accelerating work, providing immediate assistance, and fostering a shared comprehension of projects.

In the Confluence collaboration platform, employees can click on any unfamiliar term within a document to receive an explanation and links to other relevant documents. Users can also type queries into a chat area and receive automated responses based on the content of documents uploaded to Confluence. Tell it to summarize a recent meeting and include a link to the transcript, and it will immediately produce a list of agreed-upon decisions and action items.

In addition, Atlassian Intelligence can compose social media posts about a forthcoming product announcement based on the product’s Confluence specifications. Meanwhile, software developers using Jira can quickly compose a test plan drawing on the system’s existing knowledge.

Users of Jira may also utilize a virtual agent that automates support via Slack and Teams. The agent would be able to retrieve information from existing knowledge base articles to assist both agents and end users and summarize previous interactions for newly assigned support agents so they are promptly brought up to speed on an issue.

The ability of Atlassian Intelligence to translate natural language queries into Atlassian’s Jira Query Language is yet another interesting feature that should prove helpful to developers.
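For illustration only, the sketch below shows the kind of natural-language-to-JQL translation described above. The mapping is hand-written and the translate_to_jql() helper is hypothetical; neither reflects Atlassian Intelligence's actual API or output.

```python
# Illustrative only: the mapping below is hand-written, not produced by
# Atlassian Intelligence, and translate_to_jql() is a hypothetical helper.

def translate_to_jql(natural_language_query: str) -> str:
    """Return a JQL string for a small set of known example queries."""
    examples = {
        "show my open bugs in the mobile project, newest first":
            'project = "MOBILE" AND issuetype = Bug AND status != Done '
            'AND assignee = currentUser() ORDER BY created DESC',
        "high priority issues updated this week":
            'priority = High AND updated >= startOfWeek()',
    }
    return examples.get(natural_language_query.lower(), "")

if __name__ == "__main__":
    print(translate_to_jql("Show my open bugs in the mobile project, newest first"))
```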

Holger Mueller of Constellation Research Inc. said, “AI is coming to software development — that is unavoidable, and Atlassian doesn’t want to be left behind. It’s a good move, infusing ChatGPT capabilities into Confluence and Jira, because anything that increases the velocity of software developers will be welcomed by enterprises. What will be interesting to see is which of the new features become the most popular and useful.”

Atlassian stated that customers must sign up for a waiting list to access the new features, which are currently in beta for its cloud-based products. Some of the features, such as the virtual agent for Jira Service Management, will be included at no additional cost in Atlassian’s Premium and Enterprise plans.

According to Atlassian, new users who sign up for the beta can anticipate seeing them in the coming months.

Comet Offers Rapid Tuning Tools for Large Language Model Development
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/comet-offers-rapid-tuning-tools-for-large-language-model-development/
Mon, 24 Apr 2023 14:54:48 +0000

Highlights:

  • Comet said that data scientists working on artificial intelligence for natural language processing no longer spend that much time training their models.
  • Prompt Playground is one of the new tools that enables developers to iterate more quickly with various templates and comprehend how prompts affect various scenarios.

Comet ML Inc., a European machine learning operations startup, is adapting its MLOps platform to deal with large language models such as those that enable ChatGPT.

The startup announced that it is adding a number of “cutting-edge” LLM operations features to its platform to help development teams speed up prompt engineering, manage LLM workflows, and improve overall efficiency.

Founded in 2017, Comet positions itself as doing for machine learning and artificial intelligence what GitHub did for programming. Data scientists and engineers can use the company’s platform to automatically track their datasets, code changes, experiment history, and production models, which Comet says results in greater efficiency, transparency, and reproducibility.

Comet said that data scientists working on artificial intelligence for natural language processing no longer spend most of their time training their models. Instead, they spend far more time crafting the right prompts to address newer, more difficult problems. Existing MLOps systems lack the tools to monitor and analyze the performance of these prompts well enough, which is an issue for data scientists.

Gideon Mendels, Chief Executive of Comet, reported, “Since the release of ChatGPT, interest in generative AI has surged, leading to increased awareness of the inner workings of large language models. A crucial factor in the success of training such models is prompt engineering, which involves the careful crafting of effective prompts by data scientists to ensure that the model is trained on high-quality data that accurately reflects the underlying task.”

According to Mendels, prompt engineering is a natural language processing technique used to develop and refine the prompts needed to elicit accurate responses from models. Careful prompting is required to prevent “hallucinations,” which occur when an AI generates responses that sound plausible but are inaccurate or fabricated.

The CEO stated, “As prompt engineering becomes increasingly complex, the need for robust MLOps practices becomes critical, and that is where Comet steps in. The new features built by Comet help streamline the machine learning lifecycle and ensure effective data management, respectively, resulting in more efficient and reliable AI solutions.”

Andy Thurai, Vice President and Principal Analyst at Constellation Research Inc., said that because LLMs are still at an early stage of research, most MLOps systems do not offer tools for managing workflows in that area, since LLM engineering entails refining prompts for pre-trained models rather than training new models.

“The challenge is that, because LLMs are so big, the prompts need to be fine-tuned to get proper results. As a result, a huge market for prompt engineering has evolved, which involves experimenting and improving prompts that are inputted to LLMs. The inputs, outputs and the efficiency of these prompts need to be tracked for future analysis of why a certain prompt was chosen over others,” Thurai added.

Comet claimed that its new LLMOps tools are designed to do two things. First, they speed up iteration for data scientists by giving them access to a playground for rapid prompt tuning integrated with experiment management. Second, they offer debugging capabilities, including prompt chain visualization, to trace prompt experimentation and decision-making.

Mendels said, “They address the problem of prompt engineering and chaining by providing users with the ability to leverage the latest advancements in prompt management and query models, helping teams to iterate quicker, identify performance bottlenecks, and visualize the internal state of the prompt chains.”

Prompt Playground is one of the new tools, enabling developers to iterate more quickly with various templates and understand how prompts affect different scenarios. Prompt Usage Tracker, another debugging tool for prompts, responses, and chains, lets teams track their prompt usage to understand its impact at a more granular level.
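As a loose illustration of what prompt tracking involves, the sketch below logs prompt/response pairs with latency so they can be compared later. The PromptTracker class and its methods are hypothetical and generic, not Comet's SDK.

```python
import json
import time
from dataclasses import dataclass, field, asdict

# Hypothetical tracker, for illustration only; not Comet's SDK.
@dataclass
class PromptRecord:
    template: str
    variables: dict
    response: str
    latency_s: float
    metadata: dict = field(default_factory=dict)

class PromptTracker:
    """Collects prompt/response pairs so they can be compared later."""
    def __init__(self, path: str = "prompt_log.jsonl"):
        self.path = path

    def log(self, record: PromptRecord) -> None:
        # Append one JSON record per line for easy downstream analysis.
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(record)) + "\n")

def run_experiment(tracker: PromptTracker, template: str, variables: dict) -> str:
    prompt = template.format(**variables)
    start = time.time()
    response = f"stubbed model output for: {prompt}"  # stand-in for a real LLM call
    tracker.log(PromptRecord(template, variables, response, time.time() - start))
    return response

if __name__ == "__main__":
    tracker = PromptTracker()
    run_experiment(tracker, "Summarize the ticket: {ticket}", {"ticket": "Login fails on mobile"})
```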

Comet also disclosed new partnerships with LangChain Inc. and OpenAI LP, the company behind ChatGPT. According to the company, the OpenAI integration will make it feasible to use GPT-3 and other LLMs, while the LangChain integration will make it easier to build applications that chain multiple models together.

“These integrations add significant value to users by empowering data scientists to leverage the full potential of OpenAI’s GPT-3 and enabling users to streamline their workflow and get the most out of their LLM development,” Mendels mentioned.

Stability AI Announces the Publication of an Open-source Language Model
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/stability-ai-announces-the-publication-of-an-open-source-language-model/
Mon, 24 Apr 2023 14:34:05 +0000

Highlights:

  • The venture intends to develop a succession of language models, the first of which is StableLM. Future installments in the series are expected to feature more intricate architectures.
  • The new StableLM model from Stability AI can perform a comparable set of operations.

StableLM, an open-source language model that can create text and code, was recently released by Stability AI Ltd., an artificial intelligence business.

The venture intends to develop a succession of language models, the first of which is StableLM. Future additions in the series are expected to feature more intricate architectures.

Stability AI, based in London, is supported by USD 101 million in funding. It is best known as the creator of the open-source neural network Stable Diffusion, which can generate images based on text input. A few days before the latest introduction of the StableLM language model, the startup released a significant update to Stable Diffusion.

StableLM is initially available in two versions. The first has three billion parameters, the configuration settings that determine how a neural network processes data; the second has seven billion.

The more parameters a neural network has, the more tasks it can complete. PaLM, a large language model described by Google LLC last year, has more than 500 billion parameters and has demonstrated the ability to generate code and text and solve relatively complex mathematical problems.
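As a rough back-of-the-envelope illustration of what these parameter counts mean in practice, the snippet below estimates the memory needed just to hold model weights, assuming 2 bytes per parameter (16-bit precision); real deployments add activation memory and runtime overhead.

```python
# Rough memory footprint of model weights alone, assuming 2 bytes per
# parameter (16-bit floats); actual usage varies with format and runtime overhead.
def weight_footprint_gb(num_parameters: float, bytes_per_param: int = 2) -> float:
    return num_parameters * bytes_per_param / 1e9

for name, params in [("StableLM 3B", 3e9), ("StableLM 7B", 7e9), ("PaLM ~540B", 540e9)]:
    print(f"{name}: ~{weight_footprint_gb(params):.0f} GB of weights at 16-bit precision")
# Prints roughly 6 GB, 14 GB, and 1080 GB respectively.
```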

The new StableLM model from Stability AI can perform a comparable set of operations. However, the startup has yet to disclose specific information about the model’s capabilities; Stability AI intends to publish a technical overview of StableLM later.

While the startup did not reveal specifics about StableLM’s capabilities, it did describe how the model was trained. Stability AI created it using an enhanced version of The Pile, an open-source training dataset. The enhanced dataset contains 1.5 trillion tokens, data elements that each consist of a few characters.

StableLM is licensed under the CC BY-SA 4.0 open-source license. The model can be used in research and commercial endeavors, and its code can be modified as needed.

Stability AI stated in a blog post, “We open-source our models to promote transparency and foster trust. Researchers can ‘look under the hood’ to verify performance, work on interpretability techniques, identify potential risks, and help develop safeguards. Organizations across the public and private sectors can adapt (‘fine-tune’) these open-source models for their own applications.”

Stability AI released five StableLM variations trained on datasets other than The Pile. Training a model of artificial intelligence on additional data enables it to incorporate more information into its responses and perform new tasks. The five specialized variants of StableLM might be restricted to use in academic research.

Dolly, a collection of 15,000 chatbot queries and replies, was among the datasets Stability AI used to train the specialized variants of StableLM. Databricks Inc. released Dolly earlier this month. The dataset was used by Databricks to train an advanced language model available under an open-source license, similar to StableLM.

StableLM is in the alpha phase and is the first in the series of language models Stability AI intends to release. As part of its development plan, the startup aims to create StableLM variants with 15 billion to 65 billion parameters.

Didimo Launches Generative AI-backed Tool for Creating Video Game Characters
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/didimo-launches-generative-ai-backed-tool-for-creating-video-game-characters/
Wed, 19 Apr 2023 16:48:24 +0000

Highlights:

  • Users can upload template characters from any game to make sure their new avatars reflect the game’s look to maintain consistent designs.
  • The AI-generated game characters from Popul8 are believed to work with other design tools like Mixamo, ARKit, and Amazon Polly, as well as popular graphics engines like Unity and Unreal Engine.

Didimo Inc., a producer of digital human avatar technology, is introducing generative AI to create video game characters.

Didimo claims that with the release of Popul8, game developers now have an easy way to produce hundreds of distinctive game characters in a fraction of the usual time and expense. Popul8 is a generative AI tool that builds on Didimo’s existing avatar generation toolkit, reducing the time it takes to produce highly lifelike 3D video game and metaverse characters while giving creators complete control over their look.

Traditionally, creating realistic-looking characters for video games required endless hours, with painstaking attention to detail being an essential step in the design process. Designing animated and varied avatars in a matter of minutes is now possible with Popul8.

Users can upload template characters from any game so that new avatars reflect the game’s look and designs stay consistent. Artists can then adjust the appearance of each figure. Additionally, Popul8 supports “batch creation,” which allows creators to produce a large number of unique characters at once, each with its own random attributes, to fill out entire worlds or levels with non-playable characters.
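Purely as an illustration of the batch-creation idea, the sketch below generates character variants with randomized attributes. The Character class and batch_create() function are hypothetical and do not represent Popul8's actual API.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of batch character generation; Popul8's real API differs.
HAIR = ["black", "brown", "blonde", "red"]
BUILD = ["slim", "average", "heavy"]
OUTFIT = ["villager", "merchant", "guard"]

@dataclass
class Character:
    name: str
    hair: str
    build: str
    outfit: str

def batch_create(template_name: str, count: int, seed: int = 42) -> list[Character]:
    """Generate `count` NPC variants of a template with randomized attributes."""
    rng = random.Random(seed)  # seeded for reproducible batches
    return [
        Character(
            name=f"{template_name}_{i:03d}",
            hair=rng.choice(HAIR),
            build=rng.choice(BUILD),
            outfit=rng.choice(OUTFIT),
        )
        for i in range(count)
    ]

if __name__ == "__main__":
    for npc in batch_create("town_npc", 5):
        print(npc)
```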

Sean Cooper, Client Integration Lead at Didimo, said, “By streamlining the character creation process, we’re unlocking the ability for artists to create richer worlds, with character diversity, at a fraction of time and necessary technical resources.”

The AI-generated game characters from Popul8 are believed to work with other design tools like Mixamo, ARKit, and Amazon Polly, as well as popular graphics engines like Unity and Unreal Engine. Additionally, they are memory-optimized, allowing game designers to include hundreds of lifelike characters without having to sacrifice their “memory budget” or the performance of the game.

Holger Mueller, an analyst at Constellation Research Inc., said that generating video game characters is time-consuming and tedious. “Can you remember the days when game studios put sensors on humans to capture their motion and replicate it in video games? It was expensive and generative AI can therefore potentially disrupt the entire industry by lowering the cost. It can democratize game creation by empowering many more creators,” he added.

Veronica Orvalho, Chief Executive and Founder of Didimo, said that Popul8’s benefit is eliminating the cumbersome task of character creation, enabling developers to focus on the actual gameplay. “With our continuing focus to make digital spaces available to everyone by allowing more diverse, representative, authentic and personal engagement, this platform is setting the standard for positive uses of AI generation,” she added.

Game developers such as Colossal Order Ltd., Sony Group Corp., Atom Stars, Soleil Ltd., Ceek VR Inc., and NOS Communications Inc. are already using Popul8, which is now available to everyone.

Google Is Planning to Enhance its Existing AI Features and Develop a New Search Engine
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/google-is-planning-to-enhance-its-existing-ai-features-and-develop-a-new-search-engine/
Wed, 19 Apr 2023 16:43:08 +0000

Highlights:

  • According to The Times, Google’s project lacks a clear timeline, making it unclear when the features might go live.
  • The Times reports that Google plans to debut Magi next month before adding more new features in the following months.

With its core business under its most serious threat in years, Google LLC is reportedly rushing to introduce new AI-powered features and capabilities in its search engine.

According to reports, the company is developing a brand-new, AI-powered search engine and is considering updating its current search technology with AI features.

The changes are Google’s response to Samsung Electronics Co. Ltd.’s suggestion that it might stop using Google Search and switch to Microsoft Bing as its default mobile search engine, the New York Times reported recently.

According to The Times, losing Samsung could cost Google more than USD 3 billion in annual revenue. The suggestion reportedly caused widespread “panic” within Google, forcing the company to scramble to keep up with the surge in demand for technologies like ChatGPT.

According to details The Times obtained from internal emails, Google’s response is to update its search engine as part of a project called “Magi.” Google reportedly has 160 employees working in “sprint rooms” to develop new AI-powered Google Search features.

Google is said to have been in a frenzy since December of last year, when executives first grasped the significance of OpenAI LP’s ChatGPT and the problem it could pose for search. The threat to Google’s decades-long dominance of the search market only grew in February, when Microsoft Corp. revealed plans to integrate ChatGPT with Bing. Sundar Pichai, Google’s CEO, responded by pledging to soon update Google Search with new AI chat features.

One of the new features Google is developing as part of a “more personalized” experience is a service that will try to predict what users are looking for before they search. According to The Times, the project lacks a clear timeline, so it is unclear when the features might go live.

A Chrome feature called “Searchalong” that would scan the page the user is reading and provide contextual information is among the other new features rumored to be in the works. The company is also developing a chatbot that can provide code snippets in response to software engineering questions, as well as a second chatbot to aid music discovery. More experimental features, including “GIFI” and “Tivoli Tutor,” are also in development; they would let users ask Google Image Search to create images and practice another language by chatting with a bot.

However, many of these features are only partially original. For instance, Slides already has an image generation function, and Tivoli Tutor sounds a lot like Duolingo Inc.’s language learning app.

Google’s apparent panic and haste to enhance its search engine’s capabilities show how flawed the ad-based search model has become, according to analyst Charles King of Pund-IT Inc. He said, “Once upon a time, a search engine’s value was based on the quality of results it delivered, but today it’s likely that the top five or ten results you see for any given search will consist of sponsored ad links from some commercial entity.”

As a result, all internet users could gain from improved search capabilities. King said he would be surprised if Google couldn’t produce new AI-based tools that are at least on par with Microsoft’s, if not superior to them.

King said, “That said, the history of the tech industry is littered with stories of once-unstoppable firms that were undermined by more nimble and advanced competitors. Remember when Microsoft Explorer dominated the browser market to the point that the company was successfully challenged on anti-trust grounds? Then along came Google Chrome. Maybe this is just the latest tale of ‘what goes around, comes around'”.

According to Constellation Research Inc.’s Holger Mueller, who is more upbeat, Google’s plan to create a brand-new search engine based on generative AI makes sense because incremental innovation may only go so far in developing next-generation search. The analyst said, “At the same time, the coming reported updates are a good move as they can hedge against Microsoft Bing’s new AI capabilities. Though Google will in any case need to be cautious, as the verdict is still out on whether or not generative AI can really improve search experiences.”

According to The Times, Google plans to introduce Magi next month before following up with more features in the fall. On that timeline, more details about Magi could be made public on May 10 during Google I/O 2023. Google reportedly intends to make Magi’s features available to 1 million test users at first and to 30 million users by the end of the year. Magi will initially be available only in the United States.

In a statement, Google declined to directly address The Times’ claims but said that it has been incorporating AI capabilities into Google Search for years through features such as Lens and multisearch.

A Google spokesperson said, “We’ve done so in a responsible and helpful way that maintains the high bar we set for delivering quality information. Not every brainstorm deck or product idea leads to a launch, but as we’ve said before, we’re excited about bringing new AI-powered features to Search, and will share more details soon.”

Munich Re Ventures Led a USD 15.6M Series A Funding Round for Capitola
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/munich-re-ventures-led-a-usd-15-6m-series-a-funding-round-for-capitola/
Wed, 19 Apr 2023 15:01:40 +0000

Highlights:

  • According to the company based in Mountain View, California, Capitola’s platform increases the productivity of brokerage firm employees by eliminating repetitive manual tasks.
  • The platform employs AI to monitor the insurance-related risks in terms of the company’s assets.

Capitola Insurance Services LLC, a startup that uses artificial intelligence to simplify commercial insurance transactions, has raised USD 15.6 million in funding.

Munich Re Ventures led the Series A round, which was announced recently. Lightspeed Venture Partners, a previous Capitola investor, also took part. In 2021, the year it was founded, the startup raised a USD 5 million seed round.

Businesses do not purchase insurance directly from insurers but rather through a broker, an intermediary. The broker assists in identifying a policy that will meet the company’s needs. It then locates an insurer willing to take on that policy.

Capitola, based in Mountain View, California, provides a software platform designed to simplify the work of commercial insurance brokers. According to the company, the platform increases the productivity of brokerage firm employees by eliminating repetitive manual tasks.

Businesses frequently purchase insurance for multiple assets, such as servers and manufacturing equipment. Each of these items poses a different level of risk. A commercial insurance provider will only cover an asset if the risk level of that asset aligns with its internal policies.

One of the most difficult aspects of an insurance broker’s work is finding an insurer willing to cover a company’s assets. Capitola claims that its platform makes the job easier: it employs AI to assess the insurance-related risks associated with a company’s assets and then matches those risks with insurers willing to underwrite policies at that risk level.

When deciding on an insurer, a broker considers several factors. Capitola claims that its platform provides data that enables brokers to perform such evaluations more quickly.

A commercial insurance policy has an attachment point, the loss threshold at which the insurer’s coverage begins. This metric influences the monthly fee a business must pay for its insurance policy. Capitola’s platform includes an analytics tool that enables users to compare different attachment points and their associated monthly payments to find the best combination.
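The figures below are invented solely to illustrate the kind of attachment-point comparison described above; they are not Capitola data, and the trade-off shown (higher attachment point, lower premium) is a general pattern rather than a quote from any insurer.

```python
# Invented figures for illustration; not Capitola data. A higher attachment
# point means the business retains more risk, so the monthly premium falls.
options = [
    {"attachment_point_usd": 250_000, "monthly_premium_usd": 9_500},
    {"attachment_point_usd": 500_000, "monthly_premium_usd": 6_800},
    {"attachment_point_usd": 1_000_000, "monthly_premium_usd": 4_200},
]

def annual_cost(option: dict) -> int:
    return option["monthly_premium_usd"] * 12

for opt in options:
    print(f"Attachment point USD {opt['attachment_point_usd']:,}: "
          f"USD {opt['monthly_premium_usd']:,}/month, "
          f"USD {annual_cost(opt):,}/year")
```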

It also offers access to other data, such as how much a broker can make from selling a particular insurance policy. The data is visualized in charts by the company to facilitate user analysis.

Capitola claims that its platform can assist brokers in selling new policies and renewing existing ones. When a policy is about to expire, it alerts brokers and lets them request a renewal from the insurer underwriting that policy. It also includes features that let brokers switch a client to a new insurance provider if necessary.

Capitola’s platform also includes additional productivity features, such as a task assignment tool that lets managers coordinate who oversees which aspects of a commercial insurance transaction. When a business plans to buy a new policy or renew an existing one, the broker’s employees can generate a proposal automatically.

Co-founder and CEO Sivan Iram said, “The insurance industry has seen many technological advancements over the years, but very little attention has been given to the insurance professionals and the tools they use. Our platform brings together brokers and underwriters, removing many of the operational inefficiencies around manual processes and repetitive tasks to allow them to focus on what they do best.”

Capitola plans to use a part of the recent USD 15.6 million round to expand its presence in the United States. It also intends to add new features, such as market intelligence tools for brokers.

Deloitte Creates a New Practice to Assist Businesses in Deploying Generative AI
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/deloitte-creates-a-new-practice-to-assist-businesses-in-deploying-generative-ai/
Fri, 14 Apr 2023 18:56:07 +0000

Highlights:

  • Massive advances in AI technology are being driven by the emergence of accelerated computing, which is forcing businesses across industries to rethink their products and business models fundamentally.
  • According to Deloitte, generative AI has opened up a wide range of new market applications and can significantly increase business productivity.

Deloitte Touche Tohmatsu Ltd., an information technology consulting company, announced recently that it is launching a new practice to assist businesses in leveraging generative artificial intelligence, the hottest trend in the sector.

The trend has recently dominated the news because generative AI can create new content based on brief text descriptions. It powers a new generation of chatbots and virtual assistants, most notably OpenAI LP’s ChatGPT, which can converse, create artwork, write code, and more.

Businesses are eager to adopt the generative AI trend and learn how the technology can improve their operational efficiency and financial performance. Still, with so much discussion surrounding the subject, figuring out where to begin can be difficult. Deloitte believes it can help here by giving enterprise leaders the deep AI industry experience they need to develop their generative AI strategies.

According to Deloitte, generative AI has opened up a slew of new industry applications and has the potential to greatly boost company efficiency. Many organizations, however, need help developing, implementing, and operationalizing new applications built on foundation AI models.

The new practice at Deloitte will be dedicated to assisting businesses in implementing both custom-built solutions and those provided by third parties. One of its major components is the Generative AI Market Incubator, a group of engineers devoted to rapidly designing and launching generative AI pilot applications.

In order to train and improve foundation models, Deloitte also established a research and development team that will collaborate with its alliance partners, the company said. Due to Deloitte’s early adoption of generative AI and its acquisitions of startups like HashedIn Technologies and Intellify Inc., both teams are said to have extensive experience in AI and the cloud.

Additionally, the new practice will collaborate with the Deloitte AI Academy. Its purpose is to train thousands of people in various new AI skills, including model development and prompt engineering, and to close the talent gap in AI.

Deloitte cited its most recent AI Dossier report, which outlines some situations in which generative AI can be used to good effect almost immediately. They include, among other things, supply chain optimization, fraud detection, and smart factories. With the help of its new practice, clients will be helped as they deploy applications in these fields and as they navigate ethical, legal, and policy issues.

Holger Mueller, an analyst with Constellation Research Inc., argued that Deloitte made a wise decision by taking this action because businesses will require assistance in implementing and maintaining the most recent AI tools.

Mueller added, “Of course, that help is going to come from the traditional system integrators like Deloitte, which is starting with an incubator. It is land grab time for the AI services category as many things are in flux.”

The cliche that “generative AI is transforming the way we work” was echoed by Jason Girzadas, managing principal of businesses, global, and strategic services at Deloitte U.S. and the firm’s incoming Chief Executive. Deloitte is prepared to assist its clients as they “develop and deploy new and innovative AI-fueled solutions,” he said, as businesses look to adopt the trend.

Amazon Joins the Generative AI Race with Bedrock
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/amazon-joins-the-generative-ai-race-with-bedrock/
Fri, 14 Apr 2023 18:51:01 +0000

Highlights:

  • Developers can save a lot of time and money by using pre-trained foundation models instead of having to start from scratch when training a language model.
  • The first is a generative LLM for information extraction, open-ended question and answer, classification, text generation, and summarization.

Amazon Web Services Inc. has recently expanded its reach into artificial intelligence software development by releasing several new tools for generative AI training and deployment on its cloud platform.

The business described new services in a post on the AWS Machine Learning blog, including the capacity to build and train foundation models, which are extensive, pre-trained language models that lay the groundwork for particular natural language processing tasks.

Deep learning techniques are generally used to train foundation models on enormous volumes of text data, enabling them to become adept at understanding the subtleties of human language and produce content nearly indistinguishable from that written by humans.

When training a language model, developers can save time and money by using pre-trained foundation models instead of starting from scratch. OpenAI LLC’s Generative Pre-trained Transformer (GPT), for example, is a foundation model used for text generation, sentiment analysis, and language translation.

LLM Choices

The brand-new Bedrock service makes foundation models from various sources accessible through an API. These include AI21 Labs Ltd.’s Jurassic-2 multilingual large language models, which produce text in Spanish, French, German, Portuguese, Italian, and Dutch, and Anthropic PBC’s Claude LLM, a conversational and text-processing system trained to follow principles for ethical AI. Users can also access Stability AI Ltd. and Amazon LLMs through the API.

According to Swami Sivasubramanian, Vice President of database, analytics, and machine learning at AWS, foundation models are pre-trained at internet scale and can therefore be customized with comparatively little additional training. He used the example of a fashion retailer’s content marketing manager, who could give Bedrock as few as 20 examples of effective taglines from past campaigns along with relevant product descriptions; Bedrock would then automatically generate effective social media posts, display ad images, and web copy for the new products.
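As a rough sketch of the few-shot pattern Sivasubramanian describes, the snippet below builds a prompt from a handful of example taglines and sends it to a Titan text model. The boto3 "bedrock-runtime" client, the model ID, and the request/response format follow the interface AWS later documented publicly and should be treated as assumptions here, not the API as it existed at announcement time.

```python
import json
import boto3

# Assumptions: the "bedrock-runtime" boto3 client, the model ID, and the Titan
# text request format ("inputText"/"textGenerationConfig") as later documented by AWS.
EXAMPLES = [
    ("Lightweight trail running shoes", "Fly up the mountain, float back down."),
    ("Insulated steel water bottle", "Cold at dawn, cold at dusk."),
]

def build_prompt(new_product: str) -> str:
    # Few-shot prompt: prior product/tagline pairs followed by the new product.
    shots = "\n".join(f"Product: {p}\nTagline: {t}" for p, t in EXAMPLES)
    return f"{shots}\nProduct: {new_product}\nTagline:"

def generate_tagline(new_product: str) -> str:
    client = boto3.client("bedrock-runtime")
    body = {
        "inputText": build_prompt(new_product),
        "textGenerationConfig": {"maxTokenCount": 50, "temperature": 0.7},
    }
    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",  # assumed model ID
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())["results"][0]["outputText"]

if __name__ == "__main__":
    print(generate_tagline("Recycled canvas weekender bag"))
```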

In addition to the Bedrock announcement, AWS is releasing two new Titan large language models. The first is a generative LLM for information extraction, open-ended question and answer, classification, text generation, and summarization. The second converts text prompts into numerical representations that capture the meaning of the text, helping build contextual responses that go beyond paraphrasing.

No mention of OpenAI, in which Microsoft Corp. is a significant investor, was made in the announcement. Still, given the market’s demand for substantial language models, this shouldn’t be a problem for Amazon.

Although AWS is behind Microsoft and Google LLC in bringing its LLM to market, Kandaswamy argued that this shouldn’t be considered a competitive disadvantage. He said, “I don’t think anyone is so behind that they have to play catchup. It might appear that there is a big race, but the customers we speak with, other than very early adopters, have no idea what to do with it.”

Hardware Boost

Additionally, AWS is upgrading the hardware it offers for training and inference on its cloud. New network-optimized EC2 Trn1n instances, built around the company’s custom Trainium processors, now offer 1,600 gigabits per second of network bandwidth, about a 20% performance increase. The company’s Inf2 instances, which use the Inferentia2 chip for inference on massively multi-parameter generative AI applications, are also now generally available.

AWS also announced the availability of CodeWhisperer, an AI coding companion that uses a foundation model to produce real-time code suggestions based on existing code and natural language comments in an integrated development environment. The tool is accessible from several IDEs and supports Python, Java, JavaScript, TypeScript, C#, and ten other languages.

Sivasubramanian wrote, “Developers can simply tell CodeWhisperer to do a task, such as ‘parse a CSV string of songs’ and ask it to return a structured list based on values such as artist, title and highest chart rank.” CodeWhisperer produces “an entire function that parses the string and returns the list as specified.” He said developers who used the preview version reported a 57% improvement in speed and a 27% higher success rate.
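For illustration, here is the kind of function such a comment-driven suggestion might produce. The code below is written by hand to mirror the quoted example; it is not actual CodeWhisperer output.

```python
import csv
from io import StringIO

# Hand-written illustration of the "parse a CSV string of songs" example;
# not generated by CodeWhisperer.
def parse_songs(csv_string: str) -> list[dict]:
    """Parse a CSV string of songs into a list of dicts with typed chart ranks."""
    reader = csv.DictReader(StringIO(csv_string))
    songs = []
    for row in reader:
        songs.append({
            "artist": row["artist"],
            "title": row["title"],
            "highest_chart_rank": int(row["highest_chart_rank"]),
        })
    # Best-ranked songs first (rank 1 is the top of the chart).
    return sorted(songs, key=lambda s: s["highest_chart_rank"])

if __name__ == "__main__":
    data = "artist,title,highest_chart_rank\nQueen,Bohemian Rhapsody,1\nToto,Africa,3"
    print(parse_songs(data))
```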

With many players attempting to capitalize on the success of proofs of concept like ChatGPT, the LLM landscape will likely remain dispersed and chaotic for the foreseeable future. According to Kandaswamy, it is unlikely that any one model will come to dominate the market the way Google’s Natural Language API has in speech recognition.

He said, “Just because a model is good at one thing doesn’t mean it’s going to be good with everything. It’s possible over two or three years everybody will offer everybody else’s model. There will be more blending and cross-technology relationships.”

Google Launches Cloud-based Claims Acceleration Suite and a Medical AI Model
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/google-launches-cloud-based-claims-acceleration-suite-and-a-medical-ai-model/
Fri, 14 Apr 2023 17:21:07 +0000

Highlights:

  • Google LLC recently unveiled Med-PaLM 2, a neural network capable of answering medical test questions, and presented a cloud-based automation toolbox for healthcare businesses.
  • According to Google, the AI obtained a score of 85%, 18% higher than a prior-generation neural network dubbed Med-PaLM.

Google LLC recently unveiled Med-PaLM 2, a neural network capable of answering medical test questions, and presented a cloud-based automation toolbox for healthcare businesses.

The innovations were unveiled during the company’s annual The Check Up healthcare event.

Advances in AI

The announcement of Med-PaLM 2 was the first significant highlight of Google’s healthcare event. Med-PaLM 2 is a new artificial intelligence model developed internally by Google. It accepts medical queries as input and provides comprehensive responses in natural language, and Google claims the AI can also explain the reasoning behind its answers.

Med-PaLM 2’s accuracy was evaluated by having it answer a series of questions similar to those on the United States Medical Licensing Examination. According to Google, the AI scored 85%, 18% higher than a prior-generation neural network dubbed Med-PaLM. The company says Med-PaLM 2’s performance “far surpasses” comparable AI models from other companies.

In the coming weeks, Google’s cloud division intends to make Med-PaLM 2 available to a limited number of customers. According to the search behemoth, the objective is to determine how the model could be implemented in the medical field.

Aashima Gupta and Amy Waldron, Google Cloud executives, stated that Google hopes to “understand how Med-PaLM 2 might be used to facilitate rich, informative discussions, answer complex medical questions, and find insights in complicated and unstructured medical texts. They might also explore its utility to help draft short- and long-form responses and summarize documentation and insights from internal data sets and bodies of scientific knowledge.”

Med-PaLM 2 is one of several AI models Google has developed to assist medical professionals in their work. The company collaborates with numerous healthcare organizations to advance its research in this field. Alongside Med-PaLM 2, it announced four new healthcare partnerships.

The first collaboration is with an “AI-based organization” led by the non-profit Right to Care, and it focuses on making AI-powered tuberculosis screenings broadly accessible in Sub-Saharan Africa. Google says its partners have pledged to donate 100,000 free screenings.

The three additional healthcare AI partnerships are with the Kenyan non-profit Jacaranda Health, Taiwan’s Chang Gung Memorial Hospital, and the Mayo Clinic. The first two focus on interpreting ultrasound images using machine learning, while the partnership with the Mayo Clinic seeks to develop an AI model that can help physicians plan radiotherapy treatments faster.

The New Claims Acceleration Suite

In addition to its new partnerships and Med-PaLM 2 model, Google Cloud announced the Claims Acceleration Suite, which uses AI to reduce administrative work for healthcare organizations. The offering builds on multiple existing Google Cloud services, including the Document AI API for extracting information from documents.

The Claims Acceleration Suite is intended to accelerate two common healthcare administration tasks: claims processing and prior authorization for health insurance. At launch, the offering supports only the latter use case.

Prior authorization evaluates the medical necessity of a treatment plan. The evaluation requires examining medical records and other patient information that is frequently scattered across multiple documents, and Google says preparing this data for processing involves substantial manual labor.

The Claims Acceleration Suite is intended to speed up this work. It can transform medical data locked in unstructured files, such as PDFs, into a structured format that is easier to process. The offering also provides a search tool that lets medical professionals browse the extracted data.
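As a minimal sketch of the document-extraction step, the snippet below uses the Google Cloud Document AI Python client with a placeholder processor ID. This is generic Document AI usage for illustration, not the Claims Acceleration Suite itself.

```python
from google.cloud import documentai

# Generic Document AI usage for illustration; project, location, and processor
# IDs are placeholders, and this is not the Claims Acceleration Suite API.
def extract_entities(pdf_path: str, processor_name: str) -> list[tuple[str, str]]:
    """Send a PDF to a Document AI processor and return (entity type, text) pairs."""
    client = documentai.DocumentProcessorServiceClient()
    with open(pdf_path, "rb") as f:
        raw_document = documentai.RawDocument(content=f.read(), mime_type="application/pdf")
    result = client.process_document(
        request=documentai.ProcessRequest(name=processor_name, raw_document=raw_document)
    )
    return [(entity.type_, entity.mention_text) for entity in result.document.entities]

if __name__ == "__main__":
    # Placeholder resource name: projects/<project>/locations/<location>/processors/<id>
    name = "projects/my-project/locations/us/processors/1234567890abcdef"
    for entity_type, text in extract_entities("prior_auth_request.pdf", name):
        print(entity_type, "->", text)
```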
