Written by Meeri Haataja, CEO of Saidot
The European Commission has estimated that the coming AI Act will affect around 5-15% of all AI systems deployed in Europe. As the regulation nears its final approval, it's time to ask: who will be most impacted?
The act will be disruptive for teams and organizations in human resources and education: service providers in recruiting, candidate evaluation, and workforce management; schools and educational institutions at all levels; companies' HR teams deploying HR technologies; even job advertisers. Regulatory risks are becoming more central to their operations, and we can anticipate growing investments in compliance from companies operating in these fields.
Government will be another of the most disrupted sectors. Public authorities using AI systems for law enforcement, migration, asylum and border control, administration of justice, health, education, traffic, or allocation of public benefits and services will be highly impacted. For them, preparations will probably be the most challenging, considering the scale of AI activities that fall under the regulation. For the same reasons, I anticipate that many Member States will use the option the act gives them not to impose administrative fines on public authorities.
Companies operating in regulated industries such as financial services (banking, health and life insurance), healthcare, and transport will also be impacted. For them, building preparedness will require work but may be less disruptive, thanks to their typically more established risk, quality, and compliance processes. The focus will be on adapting those processes to new AI-specific requirements.
AI product companies selling to any of the above sectors will also be highly impacted. Those building high-risk AI products will need to establish ways to ensure compliance throughout their product lifecycles. AI providers should also expect growing ethical requirements from their buyers as enterprises establish compliance processes across all the AI they use.
Technology services companies will be indirectly impacted by growing enterprise procurement requirements and AI supply chain controls. They will also have a role in supporting directly impacted customers in establishing proper governance practices.
Tech companies and anyone providing general-purpose AI systems will need to consider the potential uses of such systems for high-risk purposes. If they allow use for high-risk purposes, providers must establish their own compliance and support their users' compliance through transparency and other means.
Finally, providers and users of AI systems that interact with people need to ensure that users are informed they are interacting with AI. Providers and users of AI systems for biometric categorisation, emotion recognition, or image/content creation and manipulation (deepfakes) must give specific notifications of such data processing during the AI interaction. Going forward, we're likely to see many new transparency and explainability features appear in the AI services we use.
While it is still too early to validate the exact scale of the impact, what's notable is that by regulating the use of AI in human resources and creditworthiness assessments, the regulation will affect not only specific industry verticals but the related enterprise processes across all sectors. This is why I anticipate a broad impact on all B2C industries via the codes of conduct and standard-setting mechanisms. What do you think? Do you work in one of the sectors mentioned above, or in another industry where you're already seeing the AI Act have an impact?