
Vincit Talks: how to take upcoming AI regulation into account now

04/17/2024

In the first season of the Vincit Talks vodcast, female leaders in the IT industry will talk about important topics affecting organizations. Listen in to get insights, inspiration, and tips for developing and renewing the IT industry.


How are upcoming AI regulations affecting work in the IT field? We talked with Eija Warma-Lehtinen at Castren & Snellman to find out. She runs their Data Protection and Privacy practice and co-heads their Compliance and Investigations service.


According to Eija, the first thing to keep in mind is that the EU regulation is not final yet. It's also important to remember that there is a lot of hype in this field. When it comes down to it, AI is just another technology, and we've seen plenty of new tech introduced before, so there's no need to panic.

The new regulation in a nutshell

In simple terms, the upcoming EU AI regulation is essentially a product safety regulation for artificial intelligence technology. Its purpose is to ensure that applications and services containing AI are safe for people to use. The regulation also helps ensure that our rights are taken into account.

Even though we don't have the final wording yet, the definition of what's considered artificial intelligence is quite broad. That means a wide variety of technical solutions will fall under it.

How to take the regulation into account

Many companies are piloting generative AI, creating their own applications, and using language models. So what can companies do now to comply? The regulation sets out a four-level risk scale, and the first thing to check is where your applications fall on it.

At the very top of the pyramid are the prohibited use situations, such as creating social scoring systems for people. At the bottom of the risk scale are applications or situations that involve very low risk to people and don’t require any special measures.

So companies need to really look at the two middle categories: limited risk and high risk. Limited-risk systems could include, for example, customer service robots that don't pose a great danger to our rights. In those situations, complying with the regulation simply means telling users that they're talking to a robot, not a human.

The high-risk category is the most relevant compliance category for companies developing AI solutions. These can be technologies that are, for example, part of the job application process, where the use of AI can result in discrimination. Here companies will have to expend the most effort to ensure compliance: not just informing users, but also working to remove bias from their applications as much as possible.

For more information, reach out to Henna Niiranen, Business Director, at henna.niiranen@vincit.com!


Watch the discussion below or tune in on Spotify! 🎙️