AI – EvaluateSolutions38 (https://evaluatesolutions38.com) | Latest B2B Whitepapers | Technology Trends | Latest News & Insights

Atlassian Integrates Generative AI into Confluence and Jira
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/atlassian-integrates-generative-ai-into-confluence-and-jira/
Mon, 24 Apr 2023 15:03:19 +0000

Highlights:

  • Atlassian Corporation Plc became the most recent technology company to incorporate generative artificial intelligence capabilities into its flagship collaborative software offerings.
  • The ability of Atlassian Intelligence to translate natural language queries into Atlassian’s Jira Query Language is yet another interesting feature that should prove useful to developers.

Atlassian Corporation Plc is the latest technology business to include generative artificial intelligence capabilities in its core collaborative software solutions.

The new technology, Atlassian Intelligence, is partially based on the company’s models acquired through the January 2022 acquisition of Percept.AI. It also utilizes OpenAI LP’s GPT-4 model, notable for powering the ChatGPT chatbot, whose release late last year ignited a virtual AI arms race among major tech companies.

Dozens of software companies have attempted to capitalize on the hype surrounding generative AI, which enables machines to interact with humans and respond in a nearly realistic manner by answering queries, locating information, conducting tasks, and more.

Atlassian Intelligence was developed utilizing large language models at the core of generative AI and operates by constructing “teamwork graphs” that depict the various types of work being performed by teams and their relationships. Atlassian says its open platforms let it incorporate context from third-party apps.

GPT-4, which has been trained on vast amounts of publicly available online text, will be able to assist teams in multiple ways, according to Atlassian, including accelerating work, providing immediate assistance, and fostering a shared comprehension of projects.

In the Confluence collaboration platform, employees can click on any unfamiliar term within a document to receive an explanation and links to other relevant documents. Additionally, users can write queries into a chat area and receive automated responses based on the content of documents uploaded to Confluence. Tell it to generate a summary of a recent meeting and include a link to the transcript, and it will immediately spew out a list of agreed-upon decisions and action items.

In addition, Atlassian Intelligence can compose social media posts about a forthcoming product announcement based on the product's Confluence specifications. Meanwhile, software developers using Jira can swiftly compose a test plan based on the system's knowledge of the product.

Users of Jira may also utilize a virtual agent that automates support via Slack and Teams. The agent would be able to retrieve information from existing knowledge base articles to assist both agents and end users and summarize previous interactions for newly assigned support agents so they are promptly brought up to speed on an issue.

The ability of Atlassian Intelligence to translate natural language queries into Atlassian’s Jira Query Language is yet another interesting feature that should prove helpful to developers.
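To make the feature concrete: Atlassian has not published how the translation works beyond its use of large language models, but the input/output shape can be illustrated with a toy, rule-based lookup (the example requests and JQL strings below are our own, not Atlassian's):

```python
# Toy sketch of natural-language-to-JQL translation. Atlassian Intelligence
# uses a large language model for this; the canned lookup below only
# illustrates the input/output shape of the feature.
CANNED_TRANSLATIONS = {
    "open bugs assigned to me":
        "assignee = currentUser() AND issuetype = Bug AND statusCategory != Done",
    "issues updated this week in project apollo":
        'project = "Apollo" AND updated >= startOfWeek()',
}

def to_jql(request: str) -> str:
    """Return the JQL for a known request, or raise for unknown ones."""
    key = request.strip().lower()
    if key not in CANNED_TRANSLATIONS:
        raise ValueError(f"no canned translation for: {request!r}")
    return CANNED_TRANSLATIONS[key]
```

The JQL syntax shown (`currentUser()`, `startOfWeek()`, `statusCategory`) is standard Jira Query Language; only the mapping mechanism is simplified.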

Holger Mueller of Constellation Research Inc. said, “AI is coming to software development — that is unavoidable, and Atlassian doesn’t want to be left behind. It’s a good move, infusing ChatGPT capabilities into Confluence and Jira, because anything that increases the velocity of software developers will be welcomed by enterprises. What will be interesting to see is which of the new features become the most popular and useful.”

Atlassian stated that customers must sign up for a waiting list to access the new features currently available in beta testing mode for its cloud-based products. Some of the new features, such as the virtual agent for Jira Service Management, will be included at no additional cost in Atlassian’s Premium and Enterprise programs.

According to Atlassian, new users who sign up for the beta can anticipate seeing them in the coming months.

Comet Offers Rapid Tuning Tools for Large Language Model Development
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/comet-offers-rapid-tuning-tools-for-large-language-model-development/
Mon, 24 Apr 2023 14:54:48 +0000

Highlights:

  • Comet said that data scientists working on artificial intelligence for natural language processing no longer spend that much time training their models.
  • Prompt Playground is one of the new tools that enables developers to iterate more quickly with various templates and comprehend how prompts affect various scenarios.

Comet ML Inc., a European machine learning operations startup, is adapting its MLOps platform to deal with large language models such as those that enable ChatGPT.

The startup announced the addition of several "cutting-edge" LLM operations features to its platform, intended to help development teams speed up prompt engineering, manage LLM workflows, and improve overall efficiency.

Comet, a startup founded in 2017, positions itself as doing for machine learning and artificial intelligence what GitHub did for programming. Its platform lets data scientists and engineers automatically track their datasets, code changes, experiment history, and production models. Comet says this yields efficiency, transparency, and reproducibility.

Comet said that data scientists working on artificial intelligence for natural language processing no longer spend that much time training their models. Instead, they spend far more time crafting the right prompts to address newer, harder problems. The issue for data scientists is that existing MLOps systems lack adequate tools to monitor and analyze the performance of these prompts.

Gideon Mendels, Chief Executive of Comet, reported, “Since the release of ChatGPT, interest in generative AI has surged, leading to increased awareness of the inner workings of large language models. A crucial factor in the success of training such models is prompt engineering, which involves the careful crafting of effective prompts by data scientists to ensure that the model is trained on high-quality data that accurately reflects the underlying task.”

According to Mendels, prompt engineering is a natural language processing method used to develop and refine the prompts needed to elicit accurate responses from models. Prompts are required to prevent "hallucinations," which occur when an AI fabricates responses.
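To ground the term, here is a minimal sketch of prompt construction: the function below assembles a few-shot prompt from an instruction, worked examples, and a new query. The template wording is illustrative only, not a Comet feature:

```python
def build_few_shot_prompt(instruction: str,
                          examples: list[tuple[str, str]],
                          query: str) -> str:
    """Assemble a few-shot prompt: an instruction, worked input/output
    examples, then the new query left open for the model to complete."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)
```

Prompt engineering in practice is largely the iterative refinement of templates like this one against real model outputs.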

The CEO stated, “As prompt engineering becomes increasingly complex, the need for robust MLOps practices becomes critical, and that is where Comet steps in. The new features built by Comet help streamline the machine learning lifecycle and ensure effective data management, respectively, resulting in more efficient and reliable AI solutions.”

Andy Thurai, Vice President and Principal Analyst at Constellation Research Inc., said that because LLMs are still at an early stage, most MLOps systems offer no tools for managing workflows in that area. This is because LLM engineering involves tuning prompts for pre-trained models rather than training new models.

“The challenge is that, because LLMs are so big, the prompts need to be fine-tuned to get proper results. As a result, a huge market for prompt engineering has evolved, which involves experimenting and improving prompts that are inputted to LLMs. The inputs, outputs and the efficiency of these prompts need to be tracked for future analysis of why a certain prompt was chosen over others,” Thurai added.

Comet said its new LLMOps tools are designed to do two things. First, they speed up iteration for data scientists by providing a playground for prompt tuning integrated with experiment management. Second, they offer debugging capabilities, with prompt-chain visualization to trace prompt experimentation and decision-making.

Mendels said, “They address the problem of prompt engineering and chaining by providing users with the ability to leverage the latest advancements in prompt management and query models, helping teams to iterate quicker, identify performance bottlenecks, and visualize the internal state of the prompt chains.”

Prompt Playground is a new tool that lets developers iterate more quickly on prompt templates and see how prompts behave across multiple scenarios. Another is Prompt Usage Tracker, a debugging tool for prompts, responses, and chains that teams can use to track their prompt usage and understand its impact at a more granular level.
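Comet has not published the internals of Prompt Usage Tracker; as a rough sketch of what such a tool records, consider a minimal in-memory tracker (the class, method names, and fields here are assumptions, not Comet's API):

```python
import time
from dataclasses import dataclass, field

@dataclass
class PromptTracker:
    """Minimal stand-in for a prompt-usage log: records each prompt,
    its response, and latency, and reports simple aggregates."""
    records: list = field(default_factory=list)

    def log(self, prompt: str, response: str, latency_s: float) -> None:
        # Store one record per model call for later analysis.
        self.records.append(
            {"prompt": prompt, "response": response,
             "latency_s": latency_s, "ts": time.time()}
        )

    def calls(self) -> int:
        return len(self.records)

    def mean_latency(self) -> float:
        return sum(r["latency_s"] for r in self.records) / len(self.records)
```

A production tracker would persist these records and attach metadata such as template version and model name, which is what makes later comparison of prompts possible.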

Comet also disclosed new partnerships with LangChain Inc. and OpenAI LP, the company behind ChatGPT. According to the company, the OpenAI integration will make it feasible to use GPT-3 and other LLMs, while LangChain will make it easier to build applications that chain multiple model calls together.

“These integrations add significant value to users by empowering data scientists to leverage the full potential of OpenAI’s GPT-3 and enabling users to streamline their workflow and get the most out of their LLM development,” Mendels mentioned.

Fetch.ai Introduces AI Trading Platforms for Decentralized Crypto Exchanges
https://evaluatesolutions38.com/news/tech-news/blockchain-news/fetch-ai-introduces-ai-trading-platforms-for-decentralized-crypto-exchanges/
Mon, 24 Apr 2023 14:43:38 +0000

Highlights:

  • With the help of AI agents, the new platform will be able to carry out trades on users’ behalf, ensuring the best possible trade outcomes and minimizing the need for manual interaction.
  • The company recently completed a USD 40 million funding round led by DWF Labs to accelerate the development of AI and autonomous agents.

Fetch.ai Ltd., an artificial intelligence laboratory based in Cambridge that develops AI-powered agents for peer-to-peer applications, has announced the development of new trading tools for decentralized cryptocurrency exchanges.

With the help of AI agents, the new platform will be able to carry out trades on users’ behalf, ensuring the best possible trade outcomes and minimizing the need for manual interaction. At the same time, autonomous agents may be programmed with user preferences and fine-tune tactics based on market circumstances, allowing users to communicate in a peer-to-peer fashion across marketplaces.
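Fetch.ai has not released its agent code; as a toy stand-in for "an agent programmed with user preferences that seeks the best trade outcome," the sketch below picks the cheapest peer quote that satisfies a user-set price limit and order size (all names and the decision rule are our own assumptions):

```python
from dataclasses import dataclass

@dataclass
class Quote:
    seller: str
    price: float       # price per token in the quote currency
    available: float   # amount of tokens the peer offers

def best_quote(quotes, max_price, amount):
    """Toy trading-agent decision rule (not Fetch.ai's implementation):
    among peers that can fill the order within the user's price limit,
    pick the cheapest; return None if no peer qualifies."""
    eligible = [q for q in quotes
                if q.price <= max_price and q.available >= amount]
    return min(eligible, key=lambda q: q.price, default=None)
```

A real agent would additionally negotiate peer-to-peer, adapt its limits to market conditions, and settle the chosen trade via a smart contract.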

Decentralized exchanges are components of the broader decentralized finance (DeFi) economy, a token economy based on blockchain technology that enables direct peer-to-peer transactions between users. The business claims this creates the potential for Fetch.ai's machine learning algorithms to track market conditions and match buyers and sellers to best effect.

Because each buyer and seller is directly connected, transactions occur through one-to-one smart contracts on the blockchain rather than through large liquidity pools spanning many trades and users. Large pools of cryptocurrency tokens are obvious targets both for hackers and for insider exit scams known as "rugpulls," in which the owner of the crypto wallet simply makes off with all the tokens.

Humayun Sheikh, Chief Executive of Fetch.ai, said, “As we stand at the forefront of a new era in the DeFi sector, with rapidly evolving technologies and innovations, we recognize the need to go deeper into decentralization. AI agent-based trading has enormous potential to remove central points of failure and solve some of DeFi’s biggest problems such as liquidity contract hacks and rugpulls, which cost the industry billions of dollars a year.”

According to research by De.Fi Security, which monitors these trends, crypto protocols and marketplaces lost more than USD 452 million to scams and hacks during the first quarter of 2023. Though large, these figures are far below the USD 1.3 billion in losses suffered during the same period in 2022. Many of these crimes and losses result from vulnerabilities in cryptographic protocols and blockchain smart contracts.

The business recently completed a USD 40 million fundraising round led by DWF Labs to accelerate the development of AI and autonomous agents. Through the Amadeus global distribution system, Fetch.ai has already created autonomous AI travel agents that can link customers to more than 770,000 hotels globally and make reservations on their behalf. It also tested a smart parking space management program in Germany to balance the supply and demand for parking spaces.

The new tools from Fetch.ai will go on general sale in the second quarter of this year. According to the business, these products will be the first of their type to be sold.

Crowdstrike Turns to Managed XDR to Assist Organizations in Navigating the Cyber Skills Gap
https://evaluatesolutions38.com/news/security-news/crowdstrike-turns-to-managed-xdr-to-assist-organizations-in-navigating-the-cyber-skills-gap/
Mon, 24 Apr 2023 14:41:40 +0000

Highlights:

  • Falcon Complete XDR can support teams with varying skill levels and help eliminate data and organizational silos to stop cyber adversaries.
  • As part of CrowdStrike’s “better-together strategy” for bringing XDR to organizations of all sizes, the collaboration between CrowdStrike and its partners is said to have been successful in the MDR market.

CrowdStrike Holdings Inc., a company specializing in cybersecurity, has introduced a new managed extended detection and response service called Falcon Complete XDR, which combines the power of human expertise with AI automation and threat intelligence. This service bridges the cybersecurity skills gap by offering 24/7 expert management, threat hunting and monitoring, and end-to-end remediation across all important attack surfaces.

Falcon Complete XDR can support teams with varying skill levels and help break down data and organizational silos to stop cyber adversaries. The service addresses a challenge faced by almost half of all organizations, which report needing more security operations skills. Additionally, a massive cybersecurity workforce gap of 3.4 million individuals makes it difficult for companies to hire the staff needed to implement a robust security program.

Tom Etheridge, Chief Global Services Officer of CrowdStrike, stated, “With Managed XDR services, organizations can entrust the implementation, management, response and end-to-end remediation of advanced threats across multiple vendors and attack surfaces.” He said the company can provide that without the “burden, overhead, or costs of deploying and managing a 24/7 threat detection and response function on their own.”

Alongside the introduction of Falcon Complete XDR, CrowdStrike highlighted its Partner-Delivered Managed XDR Services, in which partners use the Falcon platform to provide MXDR services to their clients.

As part of CrowdStrike’s “better-together strategy” for bringing XDR to organizations of all sizes, the collaboration between CrowdStrike and its partners is said to have been successful in the MDR market. Delivering MXDR services powered by CrowdStrike has benefited top international system integrators and managed security service providers. BT Group plc, ReliaQuest LLC, Red Canary Inc., Eviden, and Telefonica Tech S.A. are notable partners.

Stability AI Announces the Publication of an Open-source Language Model
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/stability-ai-announces-the-publication-of-an-open-source-language-model/
Mon, 24 Apr 2023 14:34:05 +0000

Highlights:

  • The venture intends to develop a succession of language models, the first of which is StableLM. Future installments in the series are expected to feature more intricate architectures.
  • The new StableLM model from Stability AI can perform a comparable set of operations.

StableLM, an open-source language model that can create text and code, was recently released by Stability AI Ltd., an artificial intelligence business.

The venture intends to develop a succession of language models, the first of which is StableLM. Future additions in the series are expected to feature more intricate architectures.

Stability AI, based in London, is supported by USD 101 million in funding. It is best known as the creator of the open-source neural network Stable Diffusion, which can generate images based on text input. A few days before the latest introduction of the StableLM language model, the startup released a significant update to Stable Diffusion.

StableLM is initially available in two versions. The first has three billion parameters, the configuration settings that determine how a neural network processes data; the second has seven billion.

The more parameters a neural network has, the more tasks it can complete. PaLM, a large language model described by Google LLC last year, is configurable with over 500 billion parameters. It has demonstrated the ability to generate code and text and solve relatively complex mathematical problems.

The new StableLM model from Stability AI can perform comparable operations. However, the startup has yet to disclose specific information regarding the model's capabilities. Stability AI intends to publish a technical overview of StableLM later.

While the startup did not reveal specific information about StableLM, it did reveal how the model was trained. Stability AI created it using an enhanced version of The Pile, an open-source training dataset. The standard edition of the dataset contains 1.5 trillion tokens, data elements consisting of a few letters each.
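For readers unfamiliar with the term, a "token" is a small unit of text. A toy whitespace tokenizer illustrates the idea, though real LLM tokenizers (for example, byte-pair encoding) split text into subword pieces, so their counts differ:

```python
def toy_tokenize(text: str) -> list[str]:
    """Toy whitespace tokenizer. Real LLM tokenizers use learned subword
    vocabularies, so this only conveys the basic concept of a token."""
    return text.split()

def token_count(text: str) -> int:
    """Rough token count under the toy whitespace scheme."""
    return len(toy_tokenize(text))
```

Under a subword scheme, a dataset's token count is typically somewhat higher than its word count, since uncommon words split into several pieces.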

StableLM is licensed under the CC BY-SA 4.0 open-source license. The model can be used in research and commercial endeavors, and its code can be modified as needed.

Stability AI stated in a blog post, “We open-source our models to promote transparency and foster trust. Researchers can ‘look under the hood’ to verify performance, work on interpretability techniques, identify potential risks, and help develop safeguards. Organizations across the public and private sectors can adapt (‘fine-tune’) these open-source models for their own applications.”

Stability AI released five StableLM variations trained on datasets other than The Pile. Training a model of artificial intelligence on additional data enables it to incorporate more information into its responses and perform new tasks. The five specialized variants of StableLM might be restricted to use in academic research.

Dolly, a collection of 15,000 chatbot queries and replies, was among the datasets Stability AI used to train the specialized variants of StableLM. Databricks Inc. released Dolly earlier this month. The dataset was used by Databricks to train an advanced language model available under an open-source license, similar to StableLM.

StableLM is in the alpha phase and is the first language model in the series Stability AI plans to release. As part of its development plan, the startup intends to create StableLM variants with 15 billion to 65 billion parameters.

Amazon Joins the Generative AI Race with Bedrock
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/amazon-joins-the-generative-ai-race-with-bedrock/
Fri, 14 Apr 2023 18:51:01 +0000

Highlights:

  • Developers can save a lot of time and money by using pre-trained foundation models instead of having to start from scratch when training a language model.
  • The first is a generative LLM for information extraction, open-ended question and answer, classification, text generation, and summarization.

Amazon Web Services Inc. has recently expanded its reach into artificial intelligence software development by releasing several new tools for generative AI training and deployment on its cloud platform.

The business described new services in a post on the AWS Machine Learning blog, including the capacity to build and train foundation models, which are extensive, pre-trained language models that lay the groundwork for particular natural language processing tasks.

Deep learning techniques are generally used to train foundation models on enormous volumes of text data, enabling them to become adept at understanding the subtleties of human language and produce content nearly indistinguishable from that written by humans.

When training a language model, developers can save time and money by using pre-trained foundation models instead of starting from scratch. OpenAI LLC's Generative Pre-trained Transformer (GPT) is one such foundation model, used for text generation, sentiment analysis, and language translation.

LLM Choices

Bedrock, the brand-new service, makes foundation models from various sources accessible through an API. These include AI21 Labs Ltd.'s Jurassic-2 multilingual large language models, which produce text in Spanish, French, German, Portuguese, Italian, and Dutch, and Anthropic PBC's Claude, a conversational and text-processing LLM trained according to principles for ethical AI systems. Users can also access Stability AI Ltd. and Amazon LLMs through the API.

According to Swami Sivasubramanian, Vice President of database, analytics, and machine learning at AWS, foundation models are pre-trained at internet scale and can therefore be customized with comparatively little additional training. He used the example of a fashion retailer's content marketing manager, who could give Bedrock as few as 20 examples of effective taglines from past campaigns, along with relevant product descriptions. Bedrock will then automatically generate effective social media posts, display-ad images, and web copy for the new products.

In addition to the Bedrock announcement, AWS is releasing two new Titan large language models. The first is a generative LLM for information extraction, open-ended question answering, classification, text generation, and summarization. The second converts text prompts into numerical representations that capture the meaning of the text, helping build contextual responses that go beyond paraphrasing.
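AWS has not detailed how the embedding model works. The general idea of mapping text to vectors whose similarity reflects meaning can be sketched with a toy bag-of-words vector and cosine similarity (real embedding models are learned neural networks, so this is illustration only):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding': word counts as a sparse vector.
    Real embedding models produce dense, learned vectors instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors (1.0 = identical
    direction, 0.0 = no overlap)."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

The practical use is the same as with Titan embeddings: texts with similar meaning land near each other, so similarity search returns contextually related documents rather than exact paraphrases.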

No mention of OpenAI, in which Microsoft Corp. is a significant investor, was made in the announcement. Still, given the market’s demand for substantial language models, this shouldn’t be a problem for Amazon.

Although AWS is behind Microsoft and Google LLC in bringing its LLM to market, Kandaswamy argued that this shouldn’t be considered a competitive disadvantage. He said, “I don’t think anyone is so behind that they have to play catchup. It might appear that there is a big race, but the customers we speak with, other than very early adopters, have no idea what to do with it.”

Hardware Boost

AWS is also upgrading its hardware for training and inference on its cloud. New network-optimized EC2 Trn1n instances offer 1,600 gigabits per second of network bandwidth, roughly a 20% performance increase, and feature the company's in-house Trainium and Inferentia2 processors. Meanwhile, the business's Inf2 instances, which use Inferentia2 for inference on massively multi-parameter generative AI applications, are now generally available.

CodeWhisperer, an AI coding companion that uses a foundation model to produce code suggestions in real-time based on previous code and natural language comments in an integrated development environment, is another product whose availability has been announced. The tool is accessible from some IDEs and supports Python, Java, JavaScript, TypeScript, C#, and ten other languages.

Sivasubramanian wrote, “Developers can simply tell CodeWhisperer to do a task, such as ‘parse a CSV string of songs’ and ask it to return a structured list based on values such as artist, title and highest chart rank.” CodeWhisperer produces “an entire function that parses the string and returns the list as specified.” He said developers who used the preview version reported a 57% improvement in speed and a 27% higher success rate.
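For reference, a hand-written function of the kind described in that example might look like the following (our sketch, not actual CodeWhisperer output; the column names are assumptions):

```python
import csv
import io

def parse_songs(csv_string: str) -> list[dict]:
    """Parse a CSV string of songs into a structured list of dicts,
    sorted by highest chart rank (rank 1 first). The column names
    'artist', 'title', 'highest_chart_rank' are illustrative."""
    reader = csv.DictReader(io.StringIO(csv_string))
    songs = [
        {"artist": row["artist"],
         "title": row["title"],
         "highest_chart_rank": int(row["highest_chart_rank"])}
        for row in reader
    ]
    return sorted(songs, key=lambda s: s["highest_chart_rank"])
```

The point of the product is that a one-line comment describing this behavior would be enough for CodeWhisperer to suggest a comparable function in the editor.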

With many players attempting to capitalize on the success of proofs of concept like ChatGPT, the LLM landscape will likely remain dispersed and chaotic for the foreseeable future. According to Kandaswamy, it's unlikely that any one model will come to dominate the market the way Google's Natural Language API has in speech recognition.

He said, “Just because a model is good at one thing doesn’t mean it’s going to be good with everything. It’s possible over two or three years everybody will offer everybody else’s model. There will be more blending and cross-technology relationships.”

Google Launches Cloud-based Claims Acceleration Suite and a Medical AI Model
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/google-launches-cloud-based-claims-acceleration-suite-and-a-medical-ai-model/
Fri, 14 Apr 2023 17:21:07 +0000

Highlights:

  • Google LLC recently unveiled Med-PaLM 2, a neural network capable of answering medical test questions, and presented a cloud-based automation toolbox for healthcare businesses.
  • According to Google, the AI obtained a score of 85%, 18% higher than a prior-generation neural network dubbed Med-PaLM.

Google LLC recently unveiled Med-PaLM 2, a neural network capable of answering medical test questions, and presented a cloud-based automation toolbox for healthcare businesses.

The innovations were unveiled during the company’s annual The Check Up healthcare event.

Advances in AI

The announcement of Med-PaLM 2 was the first significant highlight of Google’s healthcare event. Med-PaLM 2 is a novel artificial intelligence model developed internally by Google. It can accept medical queries as input and provide comprehensive responses in natural language. Google claims that AI can also elucidate the reasoning behind its answers.

Med-PaLM 2's accuracy was evaluated by having it answer a series of questions similar to those on the United States Medical Licensing Examination. According to Google, the AI scored 85%, 18% higher than a prior-generation neural network dubbed Med-PaLM. According to the company, the efficacy of Med-PaLM 2 "far surpasses" comparable AI models from other companies.

In the coming weeks, Google’s cloud division intends to make Med-PaLM 2 available to a limited number of customers. According to the search behemoth, the objective is to determine how the model could be implemented in the medical field.

Aashima Gupta and Amy Waldron, Google Cloud executives, stated that Google hopes to “understand how Med-PaLM 2 might be used to facilitate rich, informative discussions, answer complex medical questions, and find insights in complicated and unstructured medical texts. They might also explore its utility to help draft short- and long-form responses and summarize documentation and insights from internal data sets and bodies of scientific knowledge.”

Med-PaLM 2 is one of several AI models Google has developed to assist medical professionals in their work. The company collaborates with numerous healthcare organizations to advance its research in this field. In addition to announcing Med-PaLM 2, it also announced four new healthcare partnerships.

The first collaboration is with an “AI-based organization” directed by the non-profit organization Right to Care. The focus of the collaboration is to make AI-powered tuberculosis screenings broadly accessible in Sub-Saharan Africa. Google reports that its partners have pledged to donate 100,000 complimentary screenings.

The three additional healthcare AI partnerships are with Kenyan non-profit Jacaranda Health, Chang Gung Memorial Hospital of Taiwan, and Mayo Clinic. The first two of those collaborations focus on interpreting ultrasound images using machine learning. The partnership with the Mayo Clinic seeks to develop an AI model that can help physicians plan radiotherapy treatments faster.

The New Claims Acceleration Suite

In addition to its new partnerships and Med-PaLM 2 model, Google Cloud announced the Claims Acceleration Suite. It reduces administrative labor for healthcare organizations via the use of AI. The offering utilizes multiple existing Google Cloud services, including the Document AI API for document information extraction.

The Claims Acceleration Suite is intended to accelerate two frequent healthcare administration tasks: claims processing and prior authorization for health insurance. At launch, the offering supports only the latter.

Health insurance prior authorization evaluates the medical necessity of a treatment plan. The evaluation requires examining medical records and other patient information that is frequently dispersed across multiple documents. According to Google, preparing this data for processing requires substantial manual labor.

The Claims Acceleration Suite is intended to accelerate the activity. It can transform medical data in unstructured files, such as PDFs, into a structured format more amenable to processing. In addition, the offering provides a search tool for medical professionals to peruse the collected data.
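Google has not published the suite's extraction pipeline. As a minimal illustration of turning unstructured text into a structured record, the sketch below pulls labeled fields from free text with regular expressions (production systems such as Document AI use learned models, and the field names here are assumptions):

```python
import re

def extract_fields(text: str) -> dict:
    """Toy unstructured-to-structured extraction: pull labeled fields
    out of free text with regular expressions. Real document-AI systems
    use trained models to handle layout and wording variation."""
    patterns = {
        "patient_name": r"Patient:\s*(.+)",
        "procedure": r"Procedure:\s*(.+)",
    }
    record = {}
    for field_name, pattern in patterns.items():
        m = re.search(pattern, text)
        record[field_name] = m.group(1).strip() if m else None
    return record
```

The structured output is what makes downstream steps, such as the search tool mentioned above, possible: fields can be indexed and queried instead of buried in PDFs.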

Use of AI in Cybersecurity in 2023
https://evaluatesolutions38.com/insights/security/use-of-ai-in-cybersecurity-in-2023/
Fri, 14 Apr 2023 16:02:26 +0000

Highlights:

  • Research revealed that businesses that use AI as part of their strategy are emphasizing a broader view of their digital landscapes.
  • The rapid growth and adoption of AI in the cybersecurity market is due to the growing contextual integration of IOAs.

Machine Learning (ML) and Artificial Intelligence (AI) are becoming the tools of choice for scammers, who increasingly use them for stealth purposes such as generating personalized phishing emails and building malicious systems to breach protections. A recent multiyear breach featured several instances of AI-powered cyberattacks.

Use of AI to Evade Detection

Advanced Persistent Threat (APT) groups and cybercriminals enlist ML and AI experts to create malware that evades threat detection systems. Businesses are advised to stay vigilant at all times, because attackers may lurk inside an organization’s network for months while planning an attack and disabling its systems.

Equally concerning are the pace at which new vulnerabilities are disclosed and the speed with which these threat actors can harness ML and AI for stealth operations.

Hackers and scammers use AI tools to reconfigure malware, customize phishing links, and restructure algorithms to breach systems and harvest credentials.

Experts have observed that hackers are becoming adept at turning AI tools such as ChatGPT to unethical ends. Cybersecurity professionals, on the other hand, are exploring how best to use AI for defense. Time will tell which side proves more effective.

A recent survey revealed that a considerable number of IT policymakers expect a successful cyberattack credited to ChatGPT within the next year.

Developer’s AI Race

Multiple cybersecurity vendors, including CrowdStrike, Google, AWS, IBM, Palo Alto Networks, and Microsoft, are investing in ML and AI research and development (R&D) to stay ahead of cyber threats and deliver the new features enterprises require.

In ML, the system must run continuously without interruption, and data quality and model training must be prioritized. Microsoft reportedly holds substantial technology assets in the AI space.

Certain prominent companies’ DevOps and engineering expertise has effectively turned R&D efforts into new AI products. Microsoft Azure’s zero-trust development and AWS’s many cybersecurity services, for instance, show that these cloud providers have been prioritizing R&D expenditure on ML and AI.

Core Areas of Enhancing Cybersecurity Using AI in the Future

APT groups and cybercriminals increasingly use AI-powered hacking tools to create threats, putting organizations’ security teams at risk of losing the AI race. These troubling factors lead to some crucial forecasts about AI and related investments, as follows:

1) Behavioral analytics can spot and restrict malicious activities

Zero-trust frameworks provide real-time monitoring and visibility across a network. AI-powered behavioral analytics adds real-time insight into malicious activity by recognizing discrepancies and acting on them. It helps IT teams compare current behavior patterns against historical ones and flag inconsistencies. Various parameters, such as log-in attempts, configuration, and device type, are evaluated to spot anomalies and real-time threats. Broadcom, CyberArk, BlackBerry Persona, and Ivanti are among the leading providers.

A behavioral analytics approach to managing AI-powered systems prevents app and device cloning, protects against user impersonation, and lowers the risk of theft. With behavioral analysis techniques, companies can strengthen endpoint detection and response (EDR), endpoint protection platforms (EPP), transaction fraud detection, and unified endpoint management (UEM).
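The core idea, comparing new activity against a user’s established baseline, can be sketched in a few lines. This is a minimal illustration (a simple z-score test on login hours), not any vendor’s implementation:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], value: float, threshold: float = 3.0) -> bool:
    """Flag an observation that deviates from the user's baseline by more
    than `threshold` standard deviations (a simple z-score test)."""
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Baseline: this user normally logs in around 9 a.m. (hours of the day).
login_hours = [9, 9, 10, 8, 9, 9, 10, 8]
print(is_anomalous(login_hours, 3))   # a 3 a.m. login is flagged
print(is_anomalous(login_hours, 9))   # a typical login is not
```

Real products build such baselines across many signals at once (device type, location, configuration) and feed the anomalies into EDR or UEM workflows rather than a single print statement.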

2) Asset management and endpoint discovery:

Research revealed that businesses that use AI as part of their strategy emphasize a broader view of their digital landscapes. According to IBM, almost 35% of enterprises deploy automation and AI to discover endpoints and enhance asset management.

Vulnerability and patch management, the second most well-known use case, is expected to see rising adoption in the coming years. Research suggests that large-scale adoption of AI will help enterprises achieve their zero-trust initiatives.

3) Use of AI for vulnerability and patch management:

It has been observed that many security and IT personnel find patching complex and time-consuming. Moreover, several organizations report that prioritizing crucial vulnerabilities consumes most of their time.

Even well-equipped, adequately funded IT teams sometimes find patching challenging. Businesses should deploy a risk-based patch management solution and use automation to identify and remediate vulnerabilities without additional manual effort.
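Risk-based prioritization can be sketched simply: rank each vulnerability by its severity weighted by how critical the affected asset is, so the riskiest patches surface first without manual triage. The CVE identifiers and criticality weights below are hypothetical:

```python
# Hypothetical sketch of risk-based patch prioritization: rank
# vulnerabilities by CVSS severity multiplied by asset criticality.
def prioritize(vulns: list[dict]) -> list[dict]:
    """Sort vulnerabilities by risk score = CVSS x asset criticality, highest first."""
    return sorted(vulns, key=lambda v: v["cvss"] * v["criticality"], reverse=True)

backlog = [
    {"id": "CVE-A", "cvss": 9.8, "criticality": 1},  # severe flaw, low-value asset
    {"id": "CVE-B", "cvss": 7.5, "criticality": 3},  # moderate flaw, critical asset
    {"id": "CVE-C", "cvss": 4.0, "criticality": 2},
]
print([v["id"] for v in prioritize(backlog)])  # CVE-B first: 22.5 > 9.8 > 8.0
```

Note that a moderate flaw on a business-critical asset outranks a severe flaw on a low-value one, which is the point of scoring risk rather than raw severity.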

4) Threat detection using AI:

Transaction fraud detection is the most common use case and delivers high business value. File-based malware detection, process behavior analysis, and abnormal system behavior detection also offer strong feasibility and increased business value.

Organizations can deploy these solutions to spot and eliminate potential system threats.

5) Significance of AI-based indicators of attacks (IOAs):

AI’s rapid growth and adoption in the cybersecurity market is due to the growing contextual integration of IOAs. An IOA detects and evaluates an attacker’s intent, irrespective of the malware or hacking tool used in the attack. To apprehend that intent and prevent intrusion, an IOA must deliver real-time, accurate data about breaches or attempted attacks.

AI-based IOAs strengthen existing defenses by combining cloud-based ML with real-time threat intelligence to assess runtime events, generating IOAs that the sensor correlates with local file data to determine maliciousness.
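The contrast with signature matching can be made concrete: an IOA looks for an ordered sequence of behaviors that reveals intent, regardless of which tool produced them. The event names below are hypothetical, and real IOA engines score far richer telemetry than strings:

```python
# Illustrative sketch of an indicator of attack (IOA): match attacker
# intent as an ordered sequence of behaviors, not a malware signature.
# Event names are hypothetical.
CREDENTIAL_THEFT_IOA = ["process_spawn", "memory_read_lsass", "outbound_transfer"]

def matches_ioa(events: list[str], pattern: list[str]) -> bool:
    """True if `pattern` occurs as an ordered subsequence of the event stream."""
    stream = iter(events)
    return all(step in stream for step in pattern)

observed = ["logon", "process_spawn", "file_write",
            "memory_read_lsass", "outbound_transfer"]
print(matches_ioa(observed, CREDENTIAL_THEFT_IOA))          # intent matched
print(matches_ioa(["logon", "file_write"], CREDENTIAL_THEFT_IOA))  # benign stream
```

Because the check is behavioral, the same IOA fires whether the attacker used commodity malware or a novel hand-built tool, which is why the article stresses tool-independence.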

Bottom line

Threat detection dominates AI use cases. AI delivers its full potential when integrated into a zero-trust security framework that treats every identity as a security perimeter.

A clear idea of what the technology and solution protect leads to the most reliable use cases of ML and AI in cybersecurity. AI- and ML-backed technologies effectively secure these use cases, whether the asset is an access credential, a device, a container, or a client’s system. Chief Information Security Officers (CISOs) and leading organizations are becoming cyber-resilient by adopting AI-based security strategies. Moreover, the C-suite in most organizations expects cybersecurity management to be assessed in financial terms, which is where AI-based assistance comes into the picture.

]]>
https://evaluatesolutions38.com/insights/security/use-of-ai-in-cybersecurity-in-2023/feed/ 0
Cohere Collaborates with LivePerson to Extend Enterprise LLM Efforts https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/cohere-collaborates-with-liveperson-to-extend-enterprise-llm-efforts/ https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/cohere-collaborates-with-liveperson-to-extend-enterprise-llm-efforts/#respond Thu, 13 Apr 2023 14:54:00 +0000 https://evaluatesolutions38.com/?p=52002 Highlights:

  • LivePerson is a pioneer in conversational AI and chatbots. The agreement will see the two businesses combine their solutions to address issues such as AI bias and hallucinations – cases in which AI fabricates a response.
  • Cohere, situated in Toronto, stands out among AI startups because of its tight relationships with Alphabet Inc., the parent company of Google LLC. Aidan Gomez, the company’s CEO, previously worked as a researcher at Google Brain and is one of the co-authors of a seminal 2017 academic paper that first detailed the concept of a transformer model.

Cohere Inc., a generative artificial intelligence startup seen as a primary challenger to ChatGPT creator OpenAI LP, recently partnered with LivePerson Inc. to enhance its large language models (LLMs) for enterprise use cases.

LivePerson is a pioneer in conversational AI and chatbots. The agreement will see the two businesses combine their solutions to address issues such as AI bias and hallucinations – cases in which AI fabricates a response. According to the organizations, the effort has the potential to have a significant impact, leading to more reliable and responsible AI that is safe to use in enterprise environments.

Cohere, based in Toronto, stands out among AI startups because of its close relationship with Google LLC’s parent company, Alphabet Inc. The company’s CEO, Aidan Gomez, previously worked as a researcher at Google Brain and is one of the co-authors of a landmark 2017 academic paper that first detailed the concept of a transformer model. Transformers are a neural network architecture that now underpins several important AI use cases.

Under the terms of the agreement, LivePerson wants to adapt Cohere’s LLMs so that organizations can use them safely to automate more business processes. LivePerson is seeking to evolve its conversational chatbots to determine the “next steps” to take in every discussion; its models are already built on several third-party LLMs. For example, a bot may respond to a user’s inquiry with further text or recommendations, or carry out other actions, such as completing a payment.

If a conversational AI assistant is to act on its own initiative, it must be more dependable than present implementations of the technology. Knowing this, LivePerson stated that it would collaborate with Cohere to fine-tune its models so that all statements made are truthful and supported by facts.

Gomez told a prominent media house that Cohere manages AI adaptation by combining reinforcement learning and supervised learning to emphasize what’s known as “AI explainability,” or the model’s ability to always explain its results.

Cohere accomplishes this with “retrieval augmented generation,” which essentially involves asking the model to cite its sources each time it makes a statement. As a result, according to Gomez, whenever the model answers a query, it refers back to the corpus of information on which it was trained. By allowing people to verify replies, this explainability aims to eliminate hallucinations, which are especially dangerous when models are used in enterprise applications.
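The retrieval-augmented pattern can be illustrated with a toy example. This is not Cohere’s implementation: word-overlap scoring stands in for a real embedding-based retriever, the corpus is made up, and where a real system would feed the retrieved passage to an LLM, the sketch simply quotes it with its citation:

```python
# Toy illustration of retrieval augmented generation: ground each answer
# in a retrieved passage and cite it so a human can verify the reply.
# The corpus and scoring method are simplified stand-ins.
CORPUS = {
    "doc1": "Refunds are processed within five business days.",
    "doc2": "Premium support is available around the clock.",
}

def retrieve(query: str) -> str:
    """Return the id of the passage sharing the most words with the query."""
    words = set(query.lower().split())
    return max(CORPUS, key=lambda d: len(words & set(CORPUS[d].lower().split())))

def answer(query: str) -> str:
    doc_id = retrieve(query)
    # A real system would pass the passage to an LLM as grounding context;
    # here we just return it verbatim with its source citation.
    return f"{CORPUS[doc_id]} [source: {doc_id}]"

print(answer("How fast are refunds processed?"))
```

The citation is the explainability hook: every reply carries a pointer to the material it drew on, so a hallucinated claim with no supporting passage becomes easy to spot.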

According to Constellation Research Inc. analyst Holger Mueller, partnerships like the one between Cohere and LivePerson are likely to become considerably more widespread. He added, “Large language model vendors are looking for fresh outlets, especially other companies that provide viable use cases for their technology. There is a need for differentiation with such partnerships and Cohere is approaching this with a higher level of AI explainability. If it works, a lot of enterprises will be seriously interested.”

LivePerson will test and develop Cohere’s LLMs internally within its own systems before extending the integration to its customers’ deployments. Ultimately, the two businesses seek to help enterprises deploy the most advanced and accurate LLMs to boost consumer engagement and commercial outcomes.

]]>
https://evaluatesolutions38.com/news/tech-news/artificial-intelligence-news/cohere-collaborates-with-liveperson-to-extend-enterprise-llm-efforts/feed/ 0
OpenAI and Bugcrowd Partner to Offer Cybersecurity Bug Reward Program https://evaluatesolutions38.com/news/security-news/openai-and-bugcrowd-partner-to-offer-cybersecurity-bug-reward-program/ https://evaluatesolutions38.com/news/security-news/openai-and-bugcrowd-partner-to-offer-cybersecurity-bug-reward-program/#respond Thu, 13 Apr 2023 14:15:27 +0000 https://evaluatesolutions38.com/?p=51993 Highlights:

  • The program’s “rules of engagement” enable OpenAI to distinguish malicious attacks from good-faith hacking. They require following policy rules, disclosing the vulnerabilities found, and not violating users’ privacy, interfering with systems, wiping data, or negatively affecting the user experience.

OpenAI LP, the creator of ChatGPT, has partnered with crowdsourced cybersecurity firm Bugcrowd Inc. to launch a bug bounty program to identify cybersecurity threats in its artificial intelligence models.

Security researchers who report vulnerabilities, defects, or security issues they find in OpenAI’s systems can receive rewards ranging from USD 200 to USD 20,000, with the payout increasing with the severity of the bug found.
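A severity-tiered payout scheme can be pictured as a simple lookup. The article gives only the overall range (USD 200 to USD 20,000), so the tier names and intermediate amounts below are hypothetical:

```python
# Hypothetical severity-to-payout mapping for a bug bounty program.
# Only the endpoints (USD 200 and USD 20,000) come from the article;
# the tier names and middle amounts are illustrative.
TIERS = {
    "low": 200,
    "medium": 1_000,
    "high": 5_000,
    "critical": 20_000,
}

def bounty(severity: str) -> int:
    """Return the maximum payout in USD for a bug of the given severity."""
    if severity not in TIERS:
        raise ValueError(f"unknown severity: {severity}")
    return TIERS[severity]

print(bounty("low"), bounty("critical"))  # 200 20000
```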

Nevertheless, the bug bounty program does not cover model problems or non-cybersecurity concerns with the OpenAI API or ChatGPT. Bugcrowd noted in a blog post, “Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed. Addressing these issues often involves substantial research and a broader approach.”

Researchers participating in the program must also adhere to “rules of engagement” that will help OpenAI distinguish between malicious attacks and hacks conducted in good faith. These include abiding by the policy guidelines, disclosing vulnerabilities found, and not compromising users’ privacy, interfering with systems, erasing data, or negatively impacting the user experience.

Any vulnerabilities uncovered must likewise be kept private until they are approved for dissemination by OpenAI’s security team. The company’s security staff intends to issue authorization within 90 days of receiving a report.

It may seem like stating the obvious, but security researchers are urged not to use extortion, threats, or other pressure tactics to induce a response. If any of these occur, OpenAI will deny safe harbor for the vulnerability disclosed.

The announcement of the OpenAI bug bounty program has been well received by the cybersecurity community.

Melissa Bischoping, Director of endpoint security research at Tanium Inc., told a lead media house, “While certain categories of bugs may be out-of-scope in the bug bounty, that doesn’t mean the organization isn’t prioritizing internal research and security initiatives around those categories. Often, scope limitations are to help ensure the organization can triage and follow up on all bugs, and scope may be adjusted over time. Issues with ChatGPT writing malicious code or other harm or safety concerns, while definitely a risk, are not the type of issue that often qualifies as a specific ‘bug,’ and are more of an issue with the training model itself.”

]]>
https://evaluatesolutions38.com/news/security-news/openai-and-bugcrowd-partner-to-offer-cybersecurity-bug-reward-program/feed/ 0