There are many examples of AI in everyday life: online shopping and advertising, web search, digital personal assistants, machine translation, smart homes, cities and infrastructure, cars, cybersecurity, artificial intelligence against Covid-19, and the fight against disinformation.
To ensure a human-centric and ethical development of Artificial Intelligence (AI) in Europe, MEPs endorsed new transparency and risk-management rules for AI systems in May 2023 during a committee vote.
The rules follow a risk-based approach and establish obligations for providers and users depending on the level of risk the AI can generate. AI systems posing an unacceptable level of risk to people's safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people's vulnerabilities, or are used for social scoring (classifying people based on their social behaviour, socio-economic status, or personal characteristics).
Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June plenary session.