Building Reliable AI by Applying Search Engine Lessons
Business executives can combine innovative generative AI techniques with the proven elements of search to address the trust issue.
Businesses stand to gain significantly from generative AI in the form of quantifiable gains in productivity, efficiency, and customer service.
However, one possible disadvantage must not be overlooked: trust. Building generative AI systems that provide accurate answers while avoiding “hallucinations” and other false or inaccurate responses has been a challenge for business leaders. To address this challenge, it is helpful to examine how we addressed similar issues with search, an earlier transformational technology.
Because of their successes and shortcomings, search engines can teach us a lot about creating reliable generative AI applications. Businesses are using generative AI more and more in their daily operations, and the degree of accuracy required varies with the task. A rough degree of accuracy is more than sufficient if, for instance, a business is using AI to build an application that selects which advertisements to show on a website.
But there’s no room for error if AI powers a chatbot that answers important financial questions, such as how much an invoice is worth or how many paid leave days an employee has this month.
Search engines have spent decades refining how they sift through massive amounts of web data to deliver precise results.
That work has yielded valuable insights about how to surface the right information. By fusing the effective elements of search with novel generative AI techniques, business executives can address the “trust” issue and unleash the potential of generative AI in the workplace.
Sorting through Gold
Sorting through massive amounts of data and locating the best sources is one task that search engines excel at. For instance, search engines return the websites that are most likely to be reliable based on the quantity and caliber of links pointing to a given page. Additionally, search engines favor websites that are widely regarded as reliable, including official government portals or established news sources.
In business, generative AI apps can emulate these ranking techniques to return reliable results. They should favour the sources of company data that have been most frequently accessed, searched or shared.
And they should strongly favour sources that are known to be trustworthy, such as corporate training manuals or a human resources database, while deprioritising less reliable sources.
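As a rough illustration, a retrieval layer could score candidate sources by combining both signals, the way a search engine combines popularity with authority. The field names, weights and damping function below are hypothetical choices, not a prescribed formula:

```python
import math
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    access_count: int    # how often employees open, search or share it
    trust_weight: float  # e.g. 1.0 for the HR database, 0.3 for an ad-hoc wiki page

def rank_sources(sources: list[Source], top_k: int = 3) -> list[Source]:
    """Rank candidate data sources the way search engines rank pages:
    usage (popularity) signals, boosted by a trust weight for vetted systems."""
    def score(s: Source) -> float:
        # log1p damps the usage signal so one heavily used wiki page
        # cannot drown out a highly trusted but rarely opened manual.
        return math.log1p(s.access_count) * s.trust_weight
    return sorted(sources, key=score, reverse=True)[:top_k]

candidates = [
    Source("HR policy database", access_count=120, trust_weight=1.0),
    Source("Team wiki page", access_count=900, trust_weight=0.3),
    Source("Corporate training manual", access_count=40, trust_weight=0.9),
]
for source in rank_sources(candidates):
    print(source.name)
```

Running this ranks the HR policy database first: the wiki page is opened far more often, but its low trust weight keeps it from outranking vetted systems.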
Identifying the Truth
Many foundational large language models (LLMs) have been trained on the wider Internet, which, as we all know, contains both reliable and unreliable information. This means they can address questions on a wide variety of topics, but they have yet to develop the more mature, sophisticated ranking methods that search engines use to refine their results. That is one reason even reputable LLMs can hallucinate and provide incorrect answers.
One lesson here is that developers should think of an LLM as a language interlocutor rather than a source of truth. In other words, LLMs are strong at understanding language and formulating responses, but they should not be treated as a canonical source of knowledge.
To address this problem, many businesses train or ground their LLMs on their own corporate data and on vetted third-party data sets, minimising the presence of bad data. Combined with search-style ranking that favours high-quality sources, this makes AI-powered business applications far more reliable.
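One common way to put this into practice is retrieval-augmented generation (RAG): instead of trusting whatever the model absorbed from the open web, the application retrieves passages from the vetted corpus and instructs the model to answer only from them. Below is a minimal sketch, with a toy corpus and a keyword-overlap retriever standing in for a real vector store and model client:

```python
# Minimal retrieval-augmented generation (RAG) sketch. The corpus, the
# keyword-overlap retriever and the prompt wording are toy stand-ins; a
# real system would use a vector store and the company's model client.

VETTED_CORPUS = [
    "Full-time employees accrue 1.5 paid leave days per month.",
    "Invoices over $10,000 require two levels of approval.",
    "Expense reports must be filed within 30 days of purchase.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank vetted passages by keyword overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(
        VETTED_CORPUS,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(question: str) -> str:
    """Constrain the model to vetted context instead of open-web memory."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How many paid leave days do employees accrue per month?"))
```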
Knowing Your Limits
Search has become quite accomplished at understanding context to resolve ambiguous queries. For example, a search term like “swift” can have multiple meanings – the author, the programming language, the banking system, the pop sensation and so on. Search engines look at factors like geographic location and other terms in the search query to determine the user’s intent and provide the most relevant answer.
But a search engine that guesses wrong merely shows a few irrelevant results; a generative AI system that guesses wrong states a falsehood with confidence. That is unacceptable for many business use cases, so generative AI applications need a layer between the search or prompt interface and the LLM that studies the possible contexts and determines whether the system can provide an accurate answer.
If this layer finds that it cannot answer with a high degree of confidence, it should disclose that to the user. This greatly reduces the likelihood of a wrong answer, helps build trust with the user and gives them the option to supply additional context so that the generative AI app can produce a confident result, as in the sketch below.
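What such a layer might look like is sketched below, assuming a simple lookup table of known ambiguous terms; a production system would more likely score intent with a classifier trained on company queries:

```python
# Sketch of a "confidence gate" between the prompt interface and the LLM.
# The sense inventory and the confidence proxy are illustrative; a real
# system might use an intent classifier instead of a lookup table.

AMBIGUOUS_TERMS = {
    "swift": [
        "the Swift programming language",
        "the SWIFT interbank payment network",
        "the musician Taylor Swift",
    ],
}

def route_query(query: str, min_confidence: float = 0.8) -> tuple[str, str]:
    """Forward the query to the LLM only when intent is clear; otherwise
    ask the user for more context instead of guessing."""
    words = set(query.lower().split())
    for term, senses in AMBIGUOUS_TERMS.items():
        if term in words:
            # Crude proxy: with no other signal, assume intent is spread
            # evenly across the known senses of the ambiguous term.
            confidence = 1.0 / len(senses)
            if confidence < min_confidence:
                options = "; ".join(senses)
                return ("clarify", f"'{term}' could mean {options}. Which one?")
    return ("answer", query)  # unambiguous: safe to send to the LLM

action, message = route_query("how do I send a payment via swift")
print(action, "->", message)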
Beyond the Black Box
Another area where search engines fall short is explainability, which generative AI apps need in order to earn users’ trust. Generative AI applications should follow the same rule that schoolteachers give their students: show your work and cite your sources. By revealing where information came from, users can judge the reliability of both the material and its source. This transparency, now offered by a few public LLMs, ought to be a basic component of generative AI-powered business tools.
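A minimal sketch of that “show your work” principle: every response carries references to the documents it drew on. The source names and formatting here are illustrative:

```python
# "Show your work": bundle every generated answer with the documents it
# drew on. The formatting and the source names here are illustrative.

def with_citations(answer: str, sources: list[str]) -> str:
    """Append a footer listing the documents behind the answer, so users
    can judge the material by its origin."""
    footer = "\n".join(f"[{i}] {src}" for i, src in enumerate(sources, 1))
    return f"{answer}\n\nSources:\n{footer}"

print(with_citations(
    "Full-time employees accrue 1.5 paid leave days per month.",
    ["HR policy database, section 4.2", "2024 employee handbook, p. 18"],
))
```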
Appropriate Skepticism
Without a doubt, developing AI applications with low error rates remains difficult. But company leaders cannot afford to dismiss the idea, because the benefits are so evident and quantifiable. The important thing is to approach AI tools with an open mind. Just as the internet taught us to be skeptical of news and information sources, business executives should cultivate a healthy skepticism toward AI, even AI billed as reliable. The key is to constantly demand openness and explainability from AI applications while remaining aware of the ever-present possibility of bias.
These kinds of applications have the potential to change the nature of work itself. To live up to that promise, they must be designed with accuracy as their top priority and built to be dependable and trustworthy. Although search engine technology was developed with different use cases in mind, it can teach us a lot about extracting relevant results from overwhelming amounts of data. By applying those lessons and incorporating fresh methods to increase accuracy, business executives can build generative AI programs that deliver on their great promise.