How Intel Is Improving Its Methodology for Ethical AI
As artificial intelligence expands the potential for innovation, adopting responsible practices is more crucial than ever.
I have long admired Intel’s ability to anticipate the societal shifts that emerging technologies can spark. That is why we started our responsible AI (RAI) program in 2017, even before AI became widely used. Since then, we have watched artificial intelligence (AI), and deep learning in particular, drive tremendous advances across industries such as healthcare, finance, and manufacturing.
We have also watched the world transform with the rapid development of large language models (LLMs) and easier access to generative AI applications. Powerful AI tools are now within reach of people with no AI training at all, changing how people work, study, and play by enabling users everywhere to discover and apply AI capabilities at scale.
While this has created many opportunities for innovation, it has also raised growing concerns about misuse, safety, bias, and misinformation. For all of these reasons, responsible AI practices are more crucial now than ever.
At Intel, we believe responsible development must be the cornerstone of innovation throughout the AI life cycle, ensuring that AI is developed, deployed, and used in a safe, sustainable, and ethical way. Our RAI efforts are evolving rapidly in tandem with AI itself.
Internal and External Governance
A crucial component of our RAI strategy is rigorous, multidisciplinary review at every stage of the AI life cycle. Intel’s internal advisory committees assess AI development activities against the following guiding principles:
• Respect human rights
• Enable human oversight
• Enable transparency and explainability
• Advance security, safety, and reliability
• Design for privacy
• Promote equity and inclusion
• Protect the environment
Generative AI has developed quickly and brought many changes, and our program has evolved with it.
We are working hard to stay ahead of the risks, from establishing standing guidelines for safer internal deployments of LLMs to researching and building a taxonomy of the specific ways generative AI can mislead people in real-world situations.
As concerns about the environmental impact of AI have grown alongside generative AI, we have added “protect the environment” as a new guiding principle, aligned with Intel’s broader environmental stewardship goals. Addressing this complicated area is not simple, but ethical AI has never been about simplicity.
When we committed to addressing bias in 2017, strategies for doing so were still being developed.
Research and Collaboration
Despite significant advances, responsible AI is still a young field. The complexity and capability of the newest models require us to keep pushing the boundaries of the technology. Key research themes at Intel Labs include misinformation, privacy, security, safety, human/AI collaboration, AI sustainability, explainability, and transparency.
To broaden the impact of this work, we also collaborate with academic institutions around the world. We recently formed the Intel Center of Excellence on Responsible Human-AI Systems (RESUMAIS).
The multiyear project brings together premier research institutions: Leibniz Universität Hannover and the German Research Center for Artificial Intelligence (DFKI) in Germany, and the European Laboratory for Learning and Intelligent Systems (ELLIS) Alicante in Spain. RESUMAIS seeks to foster the ethical and user-centered development of AI, focusing on issues such as fairness, accountability, transparency, and human/AI collaboration.
We also continue to form and join partnerships across the ecosystem to help develop standards, benchmarks, and solutions for the novel and complex challenges of RAI. Through our participation in the MLCommons® AI Safety Working Group, the AI Alliance, Partnership on AI working groups, the Business Roundtable on Human Rights and AI, and other multistakeholder efforts, we have advanced this work not only as a company but as an industry.
Inclusive AI/Bringing AI Everywhere
Intel believes that responsibly bringing “AI Everywhere” is key to the collective advancement of business and society. This belief is the foundation of Intel’s digital readiness programming, working to provide access to AI skills to everyone, regardless of location, ethnicity, gender or background.
We were proud to expand our AI for Youth and Workforce programs to include curriculum around applied ethics and environmental sustainability. Additionally, at Intel’s third-annual AI Global Impact Festival, winners’ projects went through an ethics audit inspired by Intel’s multidisciplinary process. The festival platform also featured a lesson in which more than 4,500 students earned certifications in responsible AI skills. And, for the first time, awards were given to project teams that delivered innovative accessibility solutions using AI.
Looking Ahead
We are stepping up our efforts to understand and mitigate the particular risks posed by the rapid growth of generative AI and to deliver state-of-the-art solutions for safety, security, transparency, and trust. We are also collaborating with our Supply Chain Responsibility group to accelerate progress on the human rights issues facing global AI data enrichment workers, that is, the people who make AI datasets useful through labeling, cleaning, annotation, or validation. We are drawing on our 20 years of experience addressing issues such as forced labor and responsible sourcing to advance the global ecosystem, which will be essential to tackling this crucial issue.
Across responsible AI, we are committed to learning about new approaches, collaborating with industry partners and continuing our work. Only then can we truly unlock the potential and benefits of AI.
Lama Nachman is an Intel Fellow and director of the Intelligent Systems Research Lab at Intel Labs.