OpenAI – EvaluateSolutions38

Atlassian Integrates Generative AI into Confluence and Jira
Mon, 24 Apr 2023

Highlights:

  • Atlassian Corporation Plc became the most recent technology company to incorporate generative artificial intelligence capabilities into its flagship collaborative software offerings.
  • The ability of Atlassian Intelligence to translate natural language queries into Atlassian’s Jira Query Language is yet another interesting feature that should prove useful to developers.

Atlassian Corporation Plc is the latest technology business to include generative artificial intelligence capabilities in its core collaborative software solutions.

The new technology, Atlassian Intelligence, is partially based on models the company acquired through its January 2022 purchase of Percept.AI. It also utilizes OpenAI LP's GPT-4 model, notable for powering the ChatGPT chatbot, whose release late last year ignited a virtual AI arms race among major tech companies.

Dozens of software companies have attempted to capitalize on the hype surrounding generative AI, which enables machines to interact with humans in a near-natural manner, answering queries, locating information, carrying out tasks, and more.

Atlassian Intelligence was developed utilizing large language models at the core of generative AI and operates by constructing “teamwork graphs” that depict the various types of work being performed by teams and their relationships. Atlassian says its open platforms let it incorporate context from third-party apps.

GPT-4, which has been trained on vast amounts of publicly available online text, will be able to assist teams in multiple ways, according to Atlassian, including accelerating work, providing immediate assistance, and fostering a shared comprehension of projects.

In the Confluence collaboration platform, employees can click on any unfamiliar term within a document to receive an explanation and links to other relevant documents. Users can also type queries into a chat area and receive automated responses based on the content of documents uploaded to Confluence. Ask it to summarize a recent meeting and include a link to the transcript, and it will immediately produce a list of agreed-upon decisions and action items.

In addition, Atlassian Intelligence can compose social media posts about a forthcoming product announcement based on the product's Confluence specifications. Meanwhile, software developers using Jira can swiftly compose a test plan drawing on the system's accumulated knowledge.

Jira users can also take advantage of a virtual agent that automates support via Slack and Teams. The agent can retrieve information from existing knowledge base articles to assist both agents and end users, and it can summarize previous interactions for newly assigned support agents so they are promptly brought up to speed on an issue.

The ability of Atlassian Intelligence to translate natural language queries into Atlassian’s Jira Query Language is yet another interesting feature that should prove helpful to developers.
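
Atlassian has not detailed how this translation works under the hood, but the general pattern is straightforward to sketch: wrap the user's request in a prompt that instructs a large language model to emit only JQL. The Python snippet below is a hypothetical illustration; `call_llm` is a stand-in for whatever model endpoint is actually used, and the sample request and JQL output are invented for illustration.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g., a GPT-4 completion request)."""
    # A model given the prompt below might plausibly return:
    return ('project = "MOBILE" AND status = "In Progress" '
            'AND assignee = currentUser() ORDER BY priority DESC')


def natural_language_to_jql(question: str) -> str:
    # Constrain the model to output nothing but a JQL expression.
    prompt = (
        "Translate the following request into a single Jira Query Language (JQL) "
        "expression. Return only the JQL, with no explanation.\n\n"
        f"Request: {question}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(natural_language_to_jql(
        "Show my in-progress issues in the mobile project, highest priority first"
    ))
```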

Holger Mueller of Constellation Research Inc. said, “AI is coming to software development — that is unavoidable, and Atlassian doesn’t want to be left behind. It’s a good move, infusing ChatGPT capabilities into Confluence and Jira, because anything that increases the velocity of software developers will be welcomed by enterprises. What will be interesting to see is which of the new features become the most popular and useful.”

Atlassian stated that customers must sign up for a waiting list to access the new features, which are currently in beta testing for its cloud-based products. Some of the features, such as the virtual agent for Jira Service Management, will be included at no additional cost in Atlassian's Premium and Enterprise plans.

According to Atlassian, new users who sign up for the beta can expect to see the features in the coming months.

Deloitte Creates a New Practice to Assist Businesses in Deploying Generative AI
Fri, 14 Apr 2023

Highlights:

  • Massive advances in AI technology are being driven by the emergence of accelerated computing, which is forcing businesses across industries to rethink their products and business models fundamentally.
  • According to Deloitte, generative AI has opened up a wide range of new market applications and can significantly increase business productivity.

Deloitte Touche Tohmatsu Ltd., the professional services and consulting firm, announced recently that it is launching a new practice to assist businesses in leveraging generative artificial intelligence, the hottest trend in the sector.

This trend has recently dominated the news because the technology can create new content based on brief text descriptions. Generative AI powers a new generation of chatbots and virtual assistants, most notably OpenAI LP's ChatGPT, which can converse, create artwork, write code, and more.

Businesses are eager to adopt the generative AI trend and learn how the technology can improve their operational efficiency and financial performance. Still, with so much discussion surrounding the subject, figuring out where to begin can be difficult. Deloitte believes it can assist here by giving enterprise leaders the deep AI industry experience they require to develop their generative AI strategies.

According to Deloitte, generative AI has opened up a slew of new industry applications and has the potential to greatly boost company efficiency. Many organizations, however, need assistance developing, implementing, and operationalizing new applications built on foundation AI models.

The new practice at Deloitte will be dedicated to helping businesses implement both custom-built solutions and those provided by third parties. One of its major components is the Generative AI Market Incubator, a group of engineers devoted to rapidly designing and launching generative AI pilot applications.

In order to train and improve foundation models, Deloitte also established a research and development team that will collaborate with its alliance partners, the company said. Due to Deloitte’s early adoption of generative AI and its acquisitions of startups like HashedIn Technologies and Intellify Inc., both teams are said to have extensive experience in AI and the cloud.

Additionally, the new practice will collaborate with the Deloitte AI Academy. Its purpose is to train thousands of people in various new AI skills, including model development and prompt engineering, and to close the talent gap in AI.

Deloitte cited its most recent AI Dossier report, which outlines situations in which generative AI can be put to good use almost immediately. They include, among other things, supply chain optimization, fraud detection, and smart factories. The new practice will support clients as they deploy applications in these fields and navigate the related ethical, legal, and policy issues.

Holger Mueller, an analyst with Constellation Research Inc., argued that Deloitte made a wise decision by taking this action because businesses will require assistance in implementing and maintaining the most recent AI tools.

Mueller added, “Of course, that help is going to come from the traditional system integrators like Deloitte, which is starting with an incubator. It is land grab time for the AI services category as many things are in flux.”

Jason Girzadas, managing principal of businesses, global, and strategic services at Deloitte U.S. and the firm's incoming Chief Executive, echoed the familiar line that "generative AI is transforming the way we work." As businesses look to adopt the trend, he said, Deloitte is prepared to assist its clients as they "develop and deploy new and innovative AI-fueled solutions."

Amazon Joins the Generative AI Race with Bedrock
Fri, 14 Apr 2023

Highlights:

  • Developers can save a lot of time and money by using pre-trained foundation models instead of having to start from scratch when training a language model.
  • The first is a generative LLM for information extraction, open-ended question and answer, classification, text generation, and summarization.

Amazon Web Services Inc. has recently expanded its reach into artificial intelligence software development by releasing several new tools for generative AI training and deployment on its cloud platform.

The business described the new services in a post on the AWS Machine Learning blog, including the capacity to build and train foundation models: large, pre-trained language models that lay the groundwork for particular natural language processing tasks.

Foundation models are generally trained on enormous volumes of text data using deep learning techniques, enabling them to understand the subtleties of human language and to produce content nearly indistinguishable from that written by humans.

When training a language model, developers can save time and money by using pre-trained foundation models instead of starting from scratch. OpenAI LLC's Generative Pre-trained Transformer (GPT), for example, is a foundation model used for text generation, sentiment analysis, and language translation.
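
The announcement itself contains no code, but the reuse-instead-of-retrain idea is easy to illustrate with the open-source Hugging Face `transformers` library, shown here purely as an example and not as an AWS service, assuming the library and a backend such as PyTorch are installed:

```python
# Illustration only: reusing a pre-trained model instead of training one from scratch.
from transformers import pipeline

# Downloads a small pre-trained sentiment model on first use; no training required.
classifier = pipeline("sentiment-analysis")

print(classifier("The new release shipped ahead of schedule and customers love it."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```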

LLM Choices

Bedrock’s brand-new service makes foundation models from various sources accessible through an API. The Jurassic-2 multilingual large language models from AI21 Labs Ltd., which produce text in Spanish, French, German, Portuguese, Italian, and Dutch, and Anthropic’s PBC’s Claude LLM, a conversational and text processing system that follows moral AI system training principles are included. Users can use the API to access Stability AI Ltd. and Amazon LLMs.

According to Swami Sivasubramanian, Vice President of database, analytics, and machine learning at AWS, foundation models are pre-trained at internet scale, so they can be customized with comparatively little additional training. He used the example of a fashion retailer's content marketing manager, who could give Bedrock as few as 20 examples of effective taglines from past campaigns, along with the relevant product descriptions. Bedrock would then automatically generate effective social media posts, display ad images, and web copy for the new products.
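
AWS has not published the exact mechanics behind that example, but it maps naturally onto few-shot prompting: pack the past taglines and product descriptions into the prompt so the hosted model can imitate the brand voice. The sketch below is a hypothetical Python illustration; `invoke_foundation_model` is a stand-in rather than an actual Bedrock API call, and the example data is invented.

```python
PAST_EXAMPLES = [
    ("Lightweight waterproof trail jacket", "Outrun the weather."),
    ("Merino wool everyday crew sock", "All-day comfort, zero itch."),
    # ...up to ~20 real tagline/description pairs from past campaigns...
]


def build_tagline_prompt(new_product_description: str) -> str:
    # Few-shot prompt: show the model prior examples, then ask for one more.
    lines = ["Write a short marketing tagline in the same style as these examples:\n"]
    for description, tagline in PAST_EXAMPLES:
        lines.append(f"Product: {description}\nTagline: {tagline}\n")
    lines.append(f"Product: {new_product_description}\nTagline:")
    return "\n".join(lines)


def invoke_foundation_model(prompt: str) -> str:
    """Stand-in for a call to a hosted foundation model such as one exposed via Bedrock."""
    return "Pack light. Go far."  # illustrative output only


print(invoke_foundation_model(build_tagline_prompt("Packable 20L hiking daypack")))
```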

In addition to the Bedrock announcement, AWS is releasing two new Titan large language models. The first is a generative LLM for information extraction, open-ended question answering, classification, text generation, and summarization. The second converts text prompts into numerical representations that capture the meaning of the text, which helps build contextual responses that go beyond paraphrasing.

The announcement made no mention of OpenAI, in which Microsoft Corp. is a significant investor. Still, given the market's demand for large language models, this shouldn't be a problem for Amazon.

Although AWS is behind Microsoft and Google LLC in bringing its LLM to market, Kandaswamy argued that this shouldn’t be considered a competitive disadvantage. He said, “I don’t think anyone is so behind that they have to play catchup. It might appear that there is a big race, but the customers we speak with, other than very early adopters, have no idea what to do with it.”

Hardware Boost

Additionally, AWS is upgrading the hardware behind training and inference on its cloud. New, network-optimized EC2 Trn1n instances, built on the company's in-house Trainium processors, now offer 1,600 gigabits per second of network bandwidth, roughly a 20% increase. The company's Inf2 instances, which use its Inferentia2 processors for inference on massive, multi-parameter generative AI applications, are also now generally available.

AWS also announced the availability of CodeWhisperer, an AI coding companion that uses a foundation model to produce real-time code suggestions based on existing code and natural language comments in an integrated development environment. The tool is accessible from several IDEs and supports Python, Java, JavaScript, TypeScript, C#, and ten other languages.

Sivasubramanian wrote, "Developers can simply tell CodeWhisperer to do a task, such as 'parse a CSV string of songs' and ask it to return a structured list based on values such as artist, title and highest chart rank." CodeWhisperer then produces "an entire function that parses the string and returns the list as specified." He said developers who used the preview version reported a 57% improvement in speed and a 27% higher success rate.
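
For a sense of what that workflow produces, the function below is the kind of code the quoted prompt might yield. It is an illustrative sketch, not CodeWhisperer's actual output; the column names and the decision to sort by chart rank are assumptions.

```python
import csv
import io


def parse_songs(csv_text: str) -> list[dict]:
    """Parse a CSV string with artist, title and highest_chart_rank columns into a list of dicts."""
    reader = csv.DictReader(io.StringIO(csv_text))
    songs = [
        {
            "artist": row["artist"],
            "title": row["title"],
            "highest_chart_rank": int(row["highest_chart_rank"]),
        }
        for row in reader
    ]
    # Best chart position first.
    return sorted(songs, key=lambda s: s["highest_chart_rank"])


sample = "artist,title,highest_chart_rank\nQueen,Bohemian Rhapsody,1\nToto,Africa,3"
print(parse_songs(sample))
```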

As many players attempt to capitalize on the success of proofs of concept like ChatGPT, the LLM landscape will likely remain dispersed and chaotic for the foreseeable future. According to Kandaswamy, it's unlikely that any one model will come to dominate the market the way Google's Natural Language API has in speech recognition.

He said, “Just because a model is good at one thing doesn’t mean it’s going to be good with everything. It’s possible over two or three years everybody will offer everybody else’s model. There will be more blending and cross-technology relationships.”

Cohere Collaborates with LivePerson to Extend Enterprise LLM Efforts
Thu, 13 Apr 2023

Highlights:

  • LivePerson is a pioneer in conversational AI and chatbots. The agreement will see the two businesses combine their solutions to address issues such as AI bias and hallucinations – cases in which AI fabricates a response.
  • Cohere, situated in Toronto, stands out among AI startups because of its tight relationships with Alphabet Inc., the parent company of Google LLC. Aidan Gomez, the company’s CEO, previously worked as a researcher at Google Brain and is one of the co-authors of a seminal 2017 academic paper that first detailed the concept of a transformer model.

Cohere Inc., a generative artificial intelligence startup seen as a primary challenger to ChatGPT inventor OpenAI LP, recently partnered with LivePerson Inc. to enhance its large language models for enterprise use cases.

LivePerson is a pioneer in conversational AI and chatbots. The agreement will see the two businesses combine their solutions to address issues such as AI bias and hallucinations – cases in which AI fabricates a response. According to the organizations, the effort has the potential to have a significant impact, leading to more reliable and responsible AI that is safe to use in enterprise environments.

Cohere, based in Toronto, stands out among AI startups because of its close relationship with Alphabet Inc., the parent company of Google LLC. The company's CEO, Aidan Gomez, previously worked as a researcher at Google Brain and is one of the co-authors of a landmark 2017 academic paper that first detailed the concept of a transformer model. Transformers are a type of neural network that now serves as the foundation for many important AI applications.

Under the terms of the agreement, LivePerson wants to adapt Cohere's LLMs so that organizations can use them safely to automate more business processes. LivePerson is seeking to evolve its conversational chatbots, which are already built on several third-party LLMs, to determine the "next steps" to take in every conversation. For example, a bot may respond to a user's inquiry with further text or recommendations, or carry out other actions, such as completing a payment.

If a conversational AI assistant is to act on its own initiative, it must be more dependable than present implementations of the technology. Knowing this, LivePerson stated that it would collaborate with Cohere to fine-tune its models so that all statements made are truthful and supported by facts.

Gomez highlighted to a famous media house that Cohere manages AI adaptations by combining reinforcement learning and supervised learning to stress what’s known as “AI explainability,” or the model’s ability to always explain its results.

Cohere accomplishes this with "retrieval augmented generation," which essentially involves asking the model to cite its sources each time it makes a statement. As a result, according to Gomez, anytime the model answers a query, it will point back to the corpus of information it drew on. By allowing people to verify replies, this AI explainability aims to eliminate hallucinations, which are especially dangerous when models are used in enterprise applications.
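
Cohere has not published its implementation, but the retrieval-augmented-generation pattern Gomez describes can be sketched in a few lines: fetch the most relevant documents, pass them to the model alongside the question, and require the answer to cite them. Everything below is a toy illustration; the document store, the relevance score, and `call_llm` are placeholders.

```python
DOCUMENTS = {
    "refund-policy.md": "Refunds are issued within 14 days of purchase.",
    "shipping-faq.md": "Standard shipping takes 3-5 business days.",
}


def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    # Toy relevance score: count of shared words. A real system would use embeddings.
    def score(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    ranked = sorted(DOCUMENTS.items(), key=lambda kv: score(kv[1]), reverse=True)
    return ranked[:k]


def call_llm(prompt: str) -> str:
    """Stand-in for the generative model."""
    return "Refunds are issued within 14 days of purchase [refund-policy.md]."


def answer_with_citations(question: str) -> str:
    sources = retrieve(question)
    context = "\n".join(f"[{name}] {text}" for name, text in sources)
    prompt = (
        "Answer using only the sources below and cite the source name in brackets.\n"
        f"{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)


print(answer_with_citations("How long do refunds take?"))
```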

According to Constellation Research Inc. analyst Holger Mueller, partnerships like the one between Cohere and LivePerson are likely to become considerably more widespread. He added, “Large language model vendors are looking for fresh outlets, especially other companies that provide viable use cases for their technology. There is a need for differentiation with such partnerships and Cohere is approaching this with a higher level of AI explainability. If it works, a lot of enterprises will be seriously interested.”

LivePerson will test and develop Cohere's LLMs internally within its own systems before extending the integration to its customers' deployments. Ultimately, the two companies aim to help enterprises deploy the most advanced and accurate LLMs to boost consumer engagement and commercial outcomes.

OpenAI and Bugcrowd Partner to Offer Cybersecurity Bug Reward Program
Thu, 13 Apr 2023

Highlights:

  • The program's "rules of engagement" help OpenAI distinguish malicious attacks from good-faith hacking. They require participants to follow policy rules and disclose vulnerabilities, and prohibit violating users' privacy, interfering with systems, wiping data, or degrading the user experience.

OpenAI LP, the creator of ChatGPT, has partnered with crowdsourced cybersecurity firm Bugcrowd Inc. to launch a bug bounty program to identify cybersecurity threats in its artificial intelligence models.

Security researchers who report vulnerabilities, defects, or security issues they find in OpenAI's systems can receive rewards ranging from USD 200 to USD 20,000. The payout increases with the severity of the discovered bug.

Nevertheless, the bug bounty program does not cover model problems or non-cybersecurity concerns with the OpenAI API or ChatGPT. Bugcrowd noted in a blog post, “Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed. Addressing these issues often involves substantial research and a broader approach.”

Researchers participating in the program must also adhere to “rules of engagement” that will help OpenAI distinguish between malicious attacks and hacks conducted in good faith. They include abiding by the policy guidelines, disclosing vulnerabilities found, and not compromising users’ privacy, interfering with systems, erasing data, or negatively impacting their user experience.

Any vulnerabilities uncovered must likewise be kept private until they are approved for dissemination by OpenAI’s security team. The company’s security staff intends to issue authorization within 90 days of receiving a report.

It may seem like stating the obvious, but security researchers are also instructed not to use extortion, threats, or other pressure tactics to induce a response. If any of these occur, OpenAI will refuse safe harbor for any vulnerability disclosed.

The announcement of the OpenAI bug bounty program has been well received by the cybersecurity community.

Melissa Bischoping, Director of endpoint security research at Tanium Inc., told a lead media house, “While certain categories of bugs may be out-of-scope in the bug bounty, that doesn’t mean the organization isn’t prioritizing internal research and security initiatives around those categories. Often, scope limitations are to help ensure the organization can triage and follow up on all bugs, and scope may be adjusted over time. Issues with ChatGPT writing malicious code or other harm or safety concerns, while definitely a risk, are not the type of issue that often qualifies as a specific ‘bug,’ and are more of an issue with the training model itself.”

Recorded Future Introduces GPT-powered Threat Analytics Model
Thu, 13 Apr 2023

Highlights:

  • Insight Partners bought a majority stake in the company three years ago in a deal worth more than USD 780 million.
  • Recorded Future is one of many businesses using OpenAI's GPT family of language models to assist cybersecurity teams in their work.

Recorded Future Inc. recently released a cybersecurity tool that uses an OpenAI LP artificial intelligence model to identify threats.

Boston-based Recorded Future's software platform enables businesses to monitor hacker activity. A bank, for instance, can use the platform to find new malware campaigns that target the financial industry. Recorded Future says that over 50 percent of the Fortune 100 companies use its technology.

Three years ago, Insight Partners acquired a majority stake in the company in a deal worth more than USD 780 million.

The new tool the company unveiled recently, Recorded Future AI, is built on a neural network from OpenAI's GPT series of large language models. The most recent neural network in the series, GPT-4, debuted last month. The product line also includes more than a dozen additional AI models with various feature sets.

Companies continuously gather information about user activity, applications, and hardware in their networks to identify breaches. In the past, cybersecurity teams manually examined that data to look for fraudulent activity. The goal of Recorded Future AI is to make the task easier.

The business claims that its new tool automatically locates breach indicators in a company’s network and ranks them according to their seriousness. It also identifies weaknesses. For instance, the tool can determine whether a server has a configuration error that enables users to log in without a password.

Recorded Future AI promises to accelerate several additional tasks as well.

Cybersecurity teams regularly produce reports for executives that describe how well the corporate network is protected and where improvements can be made. To create such a report, analysts must manually collect technical data from various systems. By promising to automate some of those steps, Recorded Future AI could speed up the process by several days.

Christopher Ahlberg, Co-founder and Chief Executive Officer, said, "Now, with Recorded Future AI, we believe we can eliminate the cyber skills shortage and increase the capacity for cyber readiness by immediately surfacing actionable intelligence."

To create the tool, Recorded Future trained the GPT model it obtained from OpenAI on 100 terabytes of cybersecurity data. The data was gathered with the startup's eponymous software platform, which offers businesses information on vulnerabilities, cyberattacks, and the servers hackers use to launch malware campaigns.

The tool also draws on research from Insikt Group, the startup's research arm. In particular, it incorporates the 40,000 analyst notes on online threats that Insikt Group has produced over the years. Cybersecurity teams use these analyst notes to describe hacker strategies and share the associated technical data.

Recorded Future is one of many businesses using OpenAI's GPT family of language models to assist cybersecurity teams in their work.

Last month, Microsoft Corp. unveiled Security Copilot, a service that uses OpenAI's most recent GPT-4 model. During a breach attempt, the service automatically detects malicious activity and predicts the next moves a hacker is likely to make. Cybersecurity teams can use Security Copilot's output to guide their efforts to address breaches.

Native AI Secures USD 3.5M Funding for Building AI-Powered Customer Digital Clones
Mon, 10 Apr 2023

Highlights:

  • 11 Tribes Ventures and Connetic Ventures joined the round, which was co-led by JumpStart Ventures and Ivy Ventures.
  • Unlike most generative AI platforms trained on nonspecific public datasets, Native allows users to see the actual source(s) for specific recommendations and answers.

Market intelligence platform company Native AI announced recently that it has secured USD 3.5 million in seed funding.

By harnessing generative artificial intelligence to produce insights and answer queries, the business wants to empower marketers to create digital clones of their customers. JumpStart Ventures and Ivy Ventures co-led the funding round, with 11 Tribes Ventures and Connetic Ventures also contributing.

The company's platform is backed by a proprietary generative AI model that creates what CEO Frank Pica calls "digital twins" of target customers and consumer bases. The model leverages real-time industry, consumer, and product data. Platform users can interact with these digital twins by asking them questions about products, interests, and preferences, and receiving insights into their behaviors in return.

Pica explained that the team's first step was to use natural language processing, a form of AI that can decipher human language, to gather unstructured, unfiltered customer feedback and improve relationships between businesses and their customers.

Pica said, "Starting with natural language processing, we very quickly realized that we could generate responses, summaries, and quick insights using that raw data. Then about a year and a half on, the vision became, 'Could we actually clone individuals using AI and treat those just as you would a focus group or a survey?'"

Users of the interface can ask questions like "What clothing styles are you most likely to purchase?" or "Which lipstick brand are you most likely to buy?" The AI replies with generated text reflecting what the targeted digital twin audience would say in response.

Native's in-house AI differs from other generative AI models on the market, such as OpenAI LP's GPT-4, in that it is trained on carefully curated first-party and third-party data selected for the task rather than on nonspecific public datasets. Additionally, Native lets its paying customers view the precise sources it refers to when making recommendations and providing answers.

According to Pica, when users ask Native's AI a question, they can specify the personas, customer base, or even the particular clientele of the company they want to address. They can even configure it to create an audience of digital clones based on a rival's clientele. As a result, the AI can make precise, focused, and less biased recommendations.

Additionally, Native launches numerous AI digital twins simultaneously, in contrast to other generative AI models that function as a single entity or persona. The user consequently receives a large number of responses, which can be very helpful for marketing work.

Pica said, “We believe this framework is much more conducive to much of the work being done today at major consumer goods companies. For anyone who is doing market research and looking for marketing insights, they typically need multiple responses to help quantify. That’s the biggest key differentiator is the ability to define the audience or digital twin panel up front of who you want to target and get your responses back, such as that shopper at Sephora or at Amazon, or thousands of shoppers.”

In addition to offering a digital twin service, Native’s platform also enables users to track the daily performance of a product, an industry, and competitors by analyzing customer reviews from well-known retailer websites like Amazon, Walmart, and Target. Companies can track and assess brand health and sentiment using its technology to make informed marketing decisions.

Zilliz Cloud Fixes AI Hallucinations with Vector Database Updates
Thu, 06 Apr 2023

Highlights:

  • Zilliz Cloud is utilized to drive AI models for product recommendation engines, semantic text search, targeted advertising, risk management, and fraud protection.
  • Zilliz Cloud can offer the foundation for a ChatGPT/VectorDB/Prompts-as-Code technological stack that, according to the business, enables LLMs to vastly scale out their expertise by gaining access to multiple other data sources.

Zilliz Inc., a startup specializing in vector databases, has announced that its newly upgraded Zilliz Cloud solution is what artificial intelligence practitioners need to prevent hallucinations.

Large language models such as ChatGPT have captured the public's imagination with their striking ability to generate human-like replies to virtually any query. Nevertheless, Zilliz observes that these models are far from ideal; one of their significant flaws is that they frequently generate answers in the absence of accurate data. The AI industry refers to this phenomenon as "hallucination," and it can be highly dangerous in some scenarios, such as when AI is employed to answer customer service questions.

Zilliz thinks it can prevent these hallucinations. It quotes OpenAI LP, the developer of ChatGPT, noting that such fabricated replies can be reduced by providing LLMs with external sources of domain-specific data, which is where Zilliz Cloud can assist.

Zilliz Cloud, built on the open-source Milvus project, is a vector database that supports AI applications. AI models often translate unstructured data such as text, videos, and user actions into vectors: long numerical sequences. It is therefore frequently necessary to determine which stored vectors are closest, or most similar, to others.

A specialized vector database becomes crucial when sorting and ranking large numbers of vectors. Conventional databases are designed to store tables and documents, making them inefficient for this kind of machine learning workload. Zilliz Cloud is unique because it can dynamically update and index millions of vectors to answer the queries typically posed to AI models.
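
The core operation a vector database performs can be shown in a few lines of NumPy: find the stored vectors most similar to a query vector. The brute-force scan below is purely illustrative; Milvus and Zilliz Cloud exist precisely because indexing millions or billions of vectors cannot be done this way at scale.

```python
import numpy as np

rng = np.random.default_rng(0)
stored = rng.normal(size=(10_000, 128))            # stand-ins for document embeddings
stored /= np.linalg.norm(stored, axis=1, keepdims=True)

query = rng.normal(size=128)                       # embedding of the incoming question
query /= np.linalg.norm(query)

scores = stored @ query                            # cosine similarity (unit vectors)
top5 = np.argsort(scores)[-5:][::-1]               # indices of the five nearest vectors
print(top5, scores[top5])
```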

Zilliz Cloud drives AI models for product recommendation engines, targeted advertising, semantic text search, risk management, and fraud protection. Nonetheless, its vector-native architecture makes it perfect for LLMs as well.

By utilizing OpenAI plugins to connect to ChatGPT, Zilliz Cloud can offer the foundation for a ChatGPT/VectorDB/Prompts-as-Code technological stack that, according to the business, enables LLMs to vastly scale out their expertise by gaining access to multiple other data sources. With greater understanding, hallucinations should become far less frequent.

Charles Xie, Chief Executive, said that if AI is to fulfill its potential, it needs to become more trustworthy. He further said, "Hallucinations or wrong answers erode that trust. With the billion-vector performance of Zilliz, we can help address that by expanding context and data retrieval."

Glean Unveils First Enterprise-Grade Generative AI Search Functionalities
Wed, 05 Apr 2023

Highlights:

  • According to Glean, its generative AI capabilities will work with various LLMs and be made available through extensions for Chrome-based browsers.
  • To identify internal experts in inquiry responses, Glean said it is also strengthening its engine’s understanding of how a company’s content, personnel, and activity link to one another.

Glean Technologies Inc., which creates a search engine businesses can use to index their own content, recently added generative artificial intelligence and other AI-based technologies to its core service.

The business says it uses generative machine learning models to comprehend and synthesize content, giving more accurate answers to natural language queries while accounting for content, context, and permissions from across the organization. OpenAI LP's ChatGPT is a well-known example of generative AI.

Glean is also enhancing its engine's understanding of how a company's content, employees, and activity relate to one another so it can identify internal experts in query responses. New in-context suggestions add supplementary content and context to information, with clickable recommendations for related or pertinent content from across the company appearing in a companion window.

Enterprise GPT Difficulties

ChatGPT has motivated people to consider how the technology might be used on enterprise data. But Glean's Founder and Chief Executive, Arvind Jain, said, "The large language models in the marketplace aren't sufficient to unlock the full value. It isn't easy to train GPT-4 [the latest version of the generative pre-trained transformer model] on your knowledge base."

Access control and reliability are two issues. Jain said, “Models can hallucinate. You need to ground them so they’re using the right knowledge and asking the right questions. Those are some of the core challenges.”

According to the company, generative AI used inside the firewall must comprehend context, interpersonal relationships, a company's internal language, privacy and security parameters, and content. Glean retrains deep learning language models on a company's knowledge base to create an organizational taxonomy and capture the subtle nuances of how its people communicate. It also shows which sources are used to produce results and respects governance guidelines and permissions.

Even when specific terms are not used, the software uses semantic search principles to return results related to the query. According to Eddie Zhou, a founding engineer at Glean, a question about personally identifiable information might yield a result about log data. Zhou said, "It has read everything, knows what everyone does, and never forgets anything."
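
Glean has not disclosed how it combines ranking with access control, but the shape of a permission-aware semantic search is easy to sketch: filter candidate documents by the caller's access rights, then rank the survivors by similarity. The documents, group names, and word-overlap scoring below are all invented for illustration; a production system would use learned embeddings and real access control lists.

```python
from dataclasses import dataclass


@dataclass
class Doc:
    doc_id: str
    text: str
    allowed_groups: set


DOCS = [
    Doc("log-retention", "How long we keep application log data", {"engineering"}),
    Doc("pii-handling", "Handling personally identifiable information in logs",
        {"engineering", "legal"}),
    Doc("payroll-calendar", "2023 payroll processing dates", {"hr"}),
]


def similarity(query: str, text: str) -> float:
    # Placeholder for an embedding-based score; here, simple word overlap.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q) or 1)


def search(query: str, user_groups: set, k: int = 2) -> list:
    visible = [d for d in DOCS if d.allowed_groups & user_groups]  # enforce permissions first
    ranked = sorted(visible, key=lambda d: similarity(query, d.text), reverse=True)
    return [d.doc_id for d in ranked[:k]]


print(search("personally identifiable information", {"engineering"}))
```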

Multiple LLMs will be supported, and Glean’s generative AI capabilities will be made available via extensions for Chrome-based browsers. However, desktop systems do not currently support the feature.

Stanford HAI Publishes its Most Recent AI Index Report on the State of Artificial Intelligence
Tue, 04 Apr 2023

Highlights:

  • The institute, also known as Stanford HAI, debuted in early 2019. It researches new AI techniques as well as the technology's impact on society, and it publishes an AI Index Report every year.
  • PaLM, another Google model introduced last year, cost an estimated USD 8 million to develop, according to Stanford HAI.

The most recent edition of the AI Index Report, which explores machine learning advancements over the previous year, was published recently by the Stanford Institute for Human-Centered Artificial Intelligence.

The institute, often known as Stanford HAI, officially debuted in the first quarter of 2019. It investigates novel AI techniques and the social effects of the technology. Each year, it publishes its AI Index Report.

The latest edition of the report runs more than 350 pages. It covers a wide range of subjects, such as the cost of AI training, initiatives to lessen bias in language models, and the technology's influence on public policy. In each area it surveys, the report highlights several significant developments from the past year.

AI’s Advancements and Difficulties

Over the past year, the most cutting-edge neural networks have grown more complex. Stanford HAI cites Google LLC's Minerva large language model as an illustration. The 540-billion-parameter model, which debuted last June, required nine times as much computing power to train as OpenAI LP's GPT-3.

The rising cost of machine learning projects is a direct result of AI software's expanding hardware requirements. Stanford HAI estimates that PaLM, a different Google model launched last year, cost USD 8 million to build. That is 160 times more than GPT-2, the GPT-3 forerunner that OpenAI launched in 2019.

Although AI models are capable of much more than they were a few years ago, they nevertheless have limitations. These restrictions apply to numerous areas.

In a recent release, Stanford HAI highlighted a 2022 study that found some reasoning tasks remain difficult for advanced language models; planning-intensive tasks are frequently among the hardest for neural networks. Researchers also found numerous instances of AI bias last year, both in large language models and in neural networks designed for image synthesis.

Researchers' efforts to remedy those problems gained visibility in 2022. In its report released recently, Stanford HAI emphasized how a model-training methodology dubbed "instruction tuning" has shown promise in reducing AI bias. Instruction tuning, introduced by Google in late 2021, involves fine-tuning a model on tasks rephrased as natural-language instructions, which makes new prompts easier for the network to follow.

New Use Cases

Researchers not only improved AI models last year; they also discovered new uses for the technology. Some of those applications led to scientific breakthroughs.

Google's DeepMind machine learning division unveiled AlphaTensor, a brand-new AI system, in October 2022. DeepMind researchers developed it to find faster ways of performing matrix multiplication, a mathematical operation machine learning models frequently employ to convert input into decisions.

According to Stanford HAI, scientists also used AI last year to support research in various other fields. One effort showed how AI might be used to find new antibodies. Another project, also led by Google's DeepMind, developed a neural network that can regulate the plasma in a nuclear fusion reactor.

The Effects of AI on Society

The latest report from Stanford HAI devotes several chapters to how AI affects society. While large language models have only recently come to the general public's attention, AI is already making an impact in several fields.

About 2% of U.S. lawmakers’ proposed federal AI-related legislation was enacted in 2021. That percentage increased to 10% last year. Meanwhile, 35% of all AI-related state legislation was approved in 2022.

The education industry is also feeling the effects of machine learning. As of 2021, 11 nations had adopted and implemented a K–12 AI curriculum, according to Stanford HAI's research. Between 2010 and 2021, the proportion of new computer science Ph.D. graduates from American colleges with an AI focus nearly quadrupled to 19.1%.
