
EU AI Act Becomes Operational, Industry Responds



Businesses that fail to comply with the world’s first comprehensive AI law, which is designed to safeguard individuals’ rights, risk severe penalties.

The EU AI Act, the first comprehensive law in the world regulating AI deployments, has now gone into effect.

The legislation, which was the result of three years of intense political fighting and protracted debates, attempts to defend the rule of law and safeguard citizens’ rights from the dangers posed by high-risk artificial intelligence technologies.

The act classifies AI applications utilized throughout the bloc according to their risk level; those that pose a threat to citizens’ rights are immediately prohibited. Users of high-risk AI systems are required to maintain comprehensive logs of AI usage, undertake in-depth risk assessments, and guarantee human monitoring.

Companies that use AI systems in violation of the legislation risk steep fines, ranging from $8 million, or 1.5% of worldwide annual revenue, up to $38 million, or 7% of global annual turnover.

Transparency obligations increasingly bind foundation models, such as OpenAI’s GPT-4, requiring developers to reveal details about the underlying data prior to public distribution. The rules also apply to general-purpose AI systems, mandating that their creators provide public summaries of the training data they utilized.

This historic legislation will influence SMEs, AI training efforts, and cross-border deployments, according to industry professionals across the AI landscape who have offered their perspectives. This represents a significant milestone in AI governance.

Clear Guidance Provides Encouragement

Paul Cardno, senior manager of worldwide digital automation and innovation at 3M

Businesses have been waiting a long time for the EU’s AI Act to be introduced, since nearly 80% of citizens in the UK now think AI needs to be strictly regulated. We are aware that artificial intelligence is reshaping the world, but businesses won’t be able to benefit from it until they have the courage to challenge established norms and reevaluate current procedures. Like any new technology, if AI is misused, it can lead to more issues occurring more quickly.

While the EU Act isn’t perfect and needs to be assessed in relation to other global regulations, having a clear framework and guidance on AI from one of the world’s major economies will help encourage those who remain on the fence to tap into the AI revolution. Ensuring AI has a safe, positive ongoing influence for all organizations operating across the EU can only be a promising step forward for the industry.

AI Act Bears Resemblance to Cybersecurity Legislation

Pieter Arntz, senior threat researcher at Malwarebytes

The EU’s cybersecurity legislation, NIS2, comes to mind when I look at the EU AI Act, since in both cases regulation struggles to keep up with technological advancements. While the law offers some guidance, its main focus is on categorizing AI models according to the level of danger they represent. This implies that many of the rules will call for definitions of concepts that the judicial system is not very familiar with. Systems that pose a risk to human safety, for instance, will be prohibited. This may seem like a straightforward rule to abide by, but it quickly becomes murky in cases involving privacy, discrimination and the use of biometrics, because there are several instances in which law enforcement is granted an exemption in these areas.

A large number of the rules are derived from antiquated product safety regulations, which are challenging to adapt into new regulations for rapidly changing fields. While a screwdriver takes time to become a chainsaw, a chatbot powered by artificial intelligence can become an irascible bigot in a matter of hours. It is difficult, therefore, to judge a book by its cover or, in this instance, even by its early editions. And that’s OK when discussing AI models created with a single objective in mind. But the far more versatile large language models (LLMs) are far harder to categorize, to say nothing of the open-source models that users can modify to suit their needs.

I think it’s good that legislators have given the matter some thought and that the act gives law enforcement some tools to keep AI under control. However, the legislation will always need to change as new trends emerge and new features become accessible.

A One-Size-Fits-All Approach to Regulation Risks Being Too Strict

Eleanor Lightbody, CEO of Luminance

A landmark piece of legislation for responsible AI regulation, the EU AI Act strikes a balance between prohibiting the use of harmful AI and outlining precise standards for permitted applications. It’s true that there is a wide range of AI technologies and applications for large language models, ranging from highly specialized domain-specific AI to generic chatbots. A one-size-fits-all approach to AI legislation runs the risk of being inflexible and rapidly out of date given the speed at which AI is developing.

With the passing of the act, all eyes are now on the new Labour government to signpost the U.K.’s intentions for regulation in this crucial sector. Implementing a flexible, adaptive regulatory system will be key, and this involves close collaboration with leading AI companies of all sizes. Only by striking the right balance between innovation, regulation and collaboration can the U.K. maintain its long heritage of technological brilliance and achieve the type of AI-driven growth that the Labour Party is promising.

Impact on UK Businesses

Curtis Wilson, staff data engineer at the Synopsys Software Integrity Group

Similar to the General Data Protection Regulation (GDPR), any U.K. business that sells into the EU market will need to concern itself with the EU AI Act. However, even those that don’t cannot ignore it. Certain parts of the AI Act, particularly those concerning AI as a safety component in consumer goods, might also apply in Northern Ireland automatically as a consequence of the Windsor Framework. The U.K. government is moving to regulate AI as well, and a whitepaper released by the government last month highlighted the importance of interoperability with EU and U.S. AI regulation.

U.K. companies aligning themselves to the EU AI Act will not only maintain access to the EU market but hopefully get ahead of the curve for the upcoming U.K. regulation.

Businesses in the UK are accustomed to navigating EU regulatory frameworks, from software licensing to data privacy laws. Many of the responsibilities outlined in the act are simply data science best practices that businesses ought to be carrying out anyway. There are certain additional requirements related to certification and registration, however, which are likely to cause friction. The act recognizes that small businesses and startups will face more challenges, and it has incorporated provisions for sandboxes to support AI innovation for these smaller enterprises. Nevertheless, U.K. enterprises might not have access to these sandboxes, because they are to be established at the national level by individual member states.

Ideal for New Businesses

Gregor Hofer, CEO of Speech Graphics and Rapport

While its detractors see red tape, we regard the EU AI Act as legislation that may provide responsible AI enterprises with a competitive edge. Businesses that follow these guidelines will be in a better position to expand internationally. We see an opportunity for the EU, as it did with the GDPR, to establish a gold standard that will likely influence laws around the globe.

Specifically, for startups and SMEs like us, the tiered risk approach means we can innovate rapidly in lower-risk areas while focusing our compliance efforts where it matters most. The proposed AI regulatory sandboxes are particularly promising, offering a safe space to test cutting-edge applications.

Impact on the Wider Tech Community

Denas Grybauskas, head of legal at Oxylabs

As the AI Act comes into force, the main business challenge will be uncertainty in its first years. Various institutions, including the U.K. Office for Artificial Intelligence, courts and other regulatory bodies, will need time to adjust their positions and interpret the letter of the law. During this period, businesses will have to operate in a partial unknown, lacking clear answers as to whether the compliance measures they put in place are solid enough.

One business compliance risk that is not being discussed lies in the fact that the AI Act will affect not only firms that directly deal with AI technologies but the wider tech community as well. Currently, the AI Act lays down explicit requirements and limitations that target providers, i.e., developers, deployers, i.e., users, importers and distributors of AI systems and applications. However, some of these provisions might also bring indirect liability to the third parties participating in the AI supply chain, such as data collection companies.

Balancing Efficiency with Essential Safeguards

David Evans, vice president of product management at GoTo

AI has huge potential to improve the way people live and work. In the IT and customer experience sectors, it is automating work, driving deep insights and augmenting and assisting workers to free up their time to solve complex issues and create real human connections. In contact center products, it is particularly useful in optimizing the customer journey: allowing customers to self-serve and enabling new levels of transparency in the customer experience, creating loyal customers and happy employees.

As noted by EU authorities, this law is not the end of AI innovation, but rather the start of a journey towards better governance that will balance efficiency with essential safeguards. Strict regulation based on risk levels will only improve AI’s positive use cases across the IT sector and beyond. Trust is critical to the success of AI, and safeguards like this are an essential way to deliver the benefits while minimizing potential threats as they evolve.

AI Act Positive for Consumers

Rodney Perry, head of data and analytics at Making Science

Businesses deploying AI tools for advertising must reassess their AI practices to ensure compliance. The act’s stringent prohibitions on biometric categorization, facial recognition databases and social scoring will require advertisers to shift towards more ethical targeting and personalization strategies. The transparency requirements, particularly regarding the publication of training data, could also affect how machine learning models are trained and utilized.

Consumers will feel the positive impact of these changes, benefiting from increased protections against manipulative AI practices, which will enhance trust in AI-driven services. Businesses must prepare now to meet these new standards. Leveraging a partner with expertise in aligned technology will not only enable businesses to comply with regulations but also foster a trustworthy digital ecosystem that supports innovation and success.

Regulation is Here but Must Still Evolve

Julian Mulhare, managing director for EMEA at Searce

Businesses need to understand their new obligations to remain compliant and avoid crippling fines. Compliance with copyright laws and transparency requirements is crucial for both general-purpose AI systems, like chatbots, and generative AI models. Detailed technical documentation and clear summaries of training data, especially for generative AI models, will be necessary.

To remain agile, companies need modular AI processes for easy updates, avoiding a complete overhaul. A dedicated team and budget for AI maintenance are essential here. As AI becomes increasingly integrated, it will impact all business areas. Investing in compliance infrastructure, enhancing documentation and transparency and instilling robust cybersecurity measures will be imperative to mitigate financial risks and align with regulatory standards. Now, for the U.K. and Europe, this is the only way businesses can continue to leverage the benefits of AI while ensuring ethical standards are met.

Given the pessimism around Europe’s AI regulatory measures, regulators must strive to continuously evolve and collaborate with tech experts to ensure safe, equitable and innovative AI deployment so that the EU doesn’t fall behind.

Collaboration is a Key Consideration

Christoph Kruse, marketing director at Mint

As AI will start to impact many different areas of our lives, it’s important to set the right guardrails and regulations so that this very powerful technology isn’t exploited. We are only at the beginning of a fast-paced economic and societal revolution and we can already sense the rapid development that is about to accelerate even further. Civil actors like the legislative bodies need to show that they understand the impact it will have and make sure AI will be safe whilst supporting the enormous business potential this technology holds. Therefore, the gradual approach that the EU AI Act is taking is also the right one to make sure that innovations can thrive in the European Union.

In the marketing arena, decision-makers are shifting their AI use focus from creative tasks to the optimization of processes and supporting the management of all aspects of the advertising workflow. The companies that will thrive are those that deploy the right strategy and foster a collaborative environment in which humans utilize AI to save time, reduce manual mistakes and open up possibilities that did not exist in the pre-AI age.

Transparency Requirements Meet Data Training

Sebastian Gierlinger, vice president of engineering at Storyblok

The EU AI Act includes a transparency requirement for “publishing summaries of copyrighted data used in training.” Some get-outs allow for data mining of copyrighted works in certain instances, such as use by research institutions, but this is not considered a viable defense for AI companies with public and commercial generative AI systems. And while big tech puts pressure on governments to hold off on legislation, AI systems continue to train on copyrighted content.

Many AI companies have assumed that they’re allowed to use whatever content they want from the web and have hit out against governing policy as detrimental to growth and innovation. Mustafa Suleyman, CEO of Microsoft AI, said as much in an interview with CNBC. To back this up, last month the Chamber of Progress, a tech industry coalition whose members include Apple, Meta and Amazon, launched a campaign to defend the fair use of copyrighted works to train AI systems.

The AI Act will introduce limited exceptions for text and data mining and recognize the importance of balancing copyright protection with promoting research and innovation. It acknowledges the need for proportionality in compliance requirements for startups and SMEs.

With the implementation of the AI Act, companies must develop a comprehensive AI policy that serves as a framework for responsible and transparent AI deployment. It is important to have an AI policy that ensures that the technology is used ethically, legally and effectively.
