My page - topic 1, topic 2, topic 3 Postbox Live

5 Ways To Teach Responsible AI Chatbot

5 Ways To Teach Responsible Ai Chatbot

Five Ways To Teach An Intern-Style Responsible AI Chatbot

 

 

It’s been said that generative artificial intelligence is the sharpest intern in the world someone with minimal life experience but occasional bursts of astounding skill. Few have been left unmoved by the latest darling of the IT sector at its best.

Consider the lawyer who was lured in by AI’s apparent eloquence and then found himself subject to fines for presenting documents that the program had falsified. Or take the case against Air Canada and the damage to its reputation that resulted from its AI-driven customer support chatbot going off course and telling a falsehood about being able to compensate a customer for a bereavement fare in the past.

The fast emergence of consumer AI products and these cautionary tales did not help customers’ trust in conversational AI chatbots. It is alarming to learn that, according to a 2023 Gallup/Bentley University survey, only 21% of consumers believe that organizations would manage artificial intelligence (AI) properly.

Thus, the question emerges: What would prevent us from using AI chatbots to mentor our interns to become responsible professionals?

Here are five viewpoints that can help.

  1. Teach Your Bot Manners: It Is Not Negotiable to Show Respect

The majority of internet users can usually tell when their rights have been upheld or infringed upon on an instinctual level. For example, nobody wants to find out through an advertisement that their adolescent child is pregnant.

Studies have indicated that consumer choice can be influenced by a brand’s openness about data collection and use; more than one-third of consumers tend to favor brands that exhibit transparency.

Therefore, it is advised to adhere to the “rule of three” while creating responsible AI chatbots: transparency of intent, limitations, and privacy policies.

Giving consumers adequate information about who or what they are talking to is the first guideline of chatbot etiquette. Expressly identify the system as an automated service or artificial intelligence (AI) and outline the limitations of its assistance.

Bot dependability is the subject of the second rule. Users ought to be conscious of the percentages they are dealing with, since it is assumed that at least 3% of every chatbot’s output is fantastical. Microsoft suggests disseminating generic performance statistics summaries as well as performance disclaimers for particular scenarios or settings.

Transparency in data collection and use is the subject of the third rule. Users should be able to meaningfully consent to terms and conditions in a trustworthy customer relationship, as opposed to taking them at face value.

What’s the practical meaning of this? Although the duration of user inputs is not specified in ChatGPT’s privacy policy, Claude’s policy makes it plain that data is automatically erased after 30 days. Just like that.

 

 

  1. Submit Your Bot To Detailed Success Metrics And Early Performance Reviews

The annoying issue with AI’s broad use is that it seems to be all things to all people, which makes benchmarking more difficult.

Dr. Catherine Breslin, an AI consultant and former Alexa developer, emphasizes the importance of continuous and thorough testing, both prior to and following chatbot implementation. Given that data is the foundation of any AI application, it is important to educate bots to distinguish between legitimate and harmful data. Additionally, addressing bias calls for a variety of datasets with clear fairness parameters.

In layman’s words, fine-tuning a crucial tool in the AI risk management toolbox is like giving AI a crash course on a particular subject it needs to learn. Pedro Henriques, the founder of AI for media start-up The Newsroom and a former data science team head at LinkedIn, suggests that in order to guarantee responsible activity, AI models should be adjusted to the unique use cases, linguistic styles, and requirements of the chatbot.

Prompt engineering, the process of creating and perfecting prompts to elicit certain responses from an AI model, should also be used to fine-tune a responsible chatbot. An HR assistant bot, for example, ought to be able to cite the business’s non-discrimination policies to justify its decision to choose one applicant over another.

Chatbots should also be designed with explainability in mind to foster transparency and user trust.

Chances are, it’ll take a team effort to get the bot tested and monitored smoothly. Involving multiple IT teams early ensures smoother testing, more seamless integration and scalability for larger user bases.

 

  1. Ensure That Your Bot Passes Safety Training

Much like new interns are expected to complete health and safety training on day one, chatbots must meet key safety standards. Jonny Pelter, former CISO of Thames Water and now founding partner of CyPro, warns the stakes are high for securing chatbot infrastructure.

Beyond standard security measures like incident response and penetration testing, chatbots need a full Secure Software Development Lifecycle throughout their development.

With AI-driven threats on the rise, once-optional controls like adversarial testing, data poisoning defenses, functional transparency, AI security monitoring and model inversion-attack prevention are now crucial, warns Pelter.

Thanks to regulations like the EU AI Act and U.S President Joe Biden’s executive order, some of these practices are now gaining ground, says Carlos Ferrandis, co-founder of Alinia AI, an AI safety and control platform.

 

 

  1. Keep Your Bot Coloring Inside The Legal Lines

Determining accountability is the main issue facing responsible AI. Risk owners in legal, privacy, or security departments can find a lifeline in the more than 40 AI governance frameworks available, each targeted to a distinct audience.

The strictest regulations, such the General Data Protection Regulation and the EU’s AI Act, place legal requirements on AI systems that are used in Europe. Global, but non-binding frameworks that promote openness, accountability, and fairness include the ISO/IEC 23894 and the National Institute of Standards and Technology in the United States.

Extra barriers are required in some areas. When it comes to handling malicious bots, for example, the chatbot standards for banking established by the Institute of Electrical and Electronics Engineers provide minimal space for error.

  1. Instill The Right Values In Your Bot And Ensure It Sees The Bigger Picture

Far more than just technical expertise and security knowledge, we expect our colleagues whether interns or leaders  to uphold ethical standards such as respect for customers, environmental care and integrity.

When it comes to chatbots, we can’t demand integrity or take the lying bot to the moral court, so we assign responsibility for their actions to a “human in the loop” and provide clear reporting channels, as Dr. Breslin suggests.

Meanwhile, the environmental impact of large-scale AI chatbots is a growing concern, with no immediate fixes.

Dr. Nataliya Tkachenko, research fellow at Cambridge Judge Business School, highlights that every chatbot interaction consumes computational resources, especially in real-time applications like customer service, amplifying the issue further.

Ultimately, organizations bet on young professionals to foster a more responsible workplace over time. The same expectations could reasonably apply to AI bots and assistants. However, if there is one thing last year taught us, it is that the swift and widespread impact of AI means that rogue chatbots could escalate risks well beyond the scope of standard disciplinary procedures.

 

 

 


Discover more from Postbox Live

Subscribe to get the latest posts sent to your email.

Leave a Reply

error: Content is protected !!

Discover more from Postbox Live

Subscribe now to keep reading and get access to the full archive.

Continue reading