
AI adoption framework: technology/privacy/ security/data point of view and EU AI act

Artificial Intelligence · Cyber Security · Privacy


Gartner highlights that up to 37% of companies have already adopted some form of Artificial Intelligence (AI) in their processes. However, according to a McKinsey survey, only about 20% of companies take advantage of strategic AI capabilities; the rest are unable to exploit AI to its full extent.

AI adoption in the context of technology, privacy, security, and data is a multifaceted issue that implies far-reaching change across industries. In this article, we discuss the importance of an AI adoption framework and, for readers interested in global AI regulation or unfamiliar with the EU AI Act, explain the Act's significance, scope, and key points.

Data and Artificial Intelligence

When it comes to AI adoption in data management, it is important to understand that data needs have evolved: data must now be accessible, well stored, and appropriately categorized. Effective data management requires strong data governance practices built on an adequate infrastructure.

Organizations have to establish data governance policies and practices to maintain data quality and integrity, as well as implement measures to detect and mitigate bias in AI algorithms. Ultimately, a robust data governance strategy is paramount to maximizing the value of data. Such a strategy encompasses data availability, accountability, usability, consistency, and security, which entails establishing processes, procedures, and policies that guarantee efficient data management.
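To make the bias-mitigation point concrete, here is a minimal Python sketch of one common check, the demographic parity gap between groups in a model's decisions. The column names and data are hypothetical, and a real bias audit would go far beyond a single metric.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Difference between the highest and lowest positive-outcome rates
    across groups; a large gap can signal bias worth investigating."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical loan-approval decisions produced by an AI model
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   0,   0,   1,   1,   1],
})
print(f"Parity gap: {demographic_parity_gap(decisions, 'group', 'approved'):.2f}")  # 0.67
```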

The General Data Protection Regulation (GDPR) and Artificial Intelligence

The EU GDPR determines how the personal data of individuals in the EU may be processed and transferred. The GDPR also defines individuals’ fundamental rights in the digital age, the obligations of those processing data, methods for ensuring compliance, and sanctions for those in breach of the rules.

The information requirements set out by the GDPR can be met for AI-based processing, although the complexity of AI applications has to be taken into account. Data protection authorities should actively engage with all stakeholders, including controllers, processors, and society at large, to develop effective responses based on shared values and viable technologies.

By consistently applying data protection principles and harnessing AI technology effectively, trust can be built, and risks can be mitigated, contributing to the success of AI-powered applications. Organizations should consistently review and update their data protection practices, adapting to changing regulations and emerging privacy risks while navigating through the GDPR AI-related requirements.

Technology in AI adoption framework

Data availability and quality are key technical challenges for AI adoption and scaling. Organizations therefore have to secure access to sufficient and diverse data sources; store, process, and analyze data efficiently; and protect it from breaches or misuse. They also need to invest in the proper infrastructure, tools, and platforms to support AI development, deployment, and maintenance, and to scale up or down as required.
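As a small illustration of what a data quality safeguard can look like in practice, the sketch below gates records before they enter an AI pipeline. The fields and validity rules are hypothetical; real pipelines typically rely on dedicated validation tooling.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    user_id: Optional[str]
    age: Optional[int]

def is_valid(record: Record) -> bool:
    """Reject records with missing identifiers or implausible values."""
    if not record.user_id:
        return False
    if record.age is None or not 0 <= record.age <= 120:
        return False
    return True

batch = [Record("u1", 34), Record(None, 29), Record("u3", 999)]
clean = [r for r in batch if is_valid(r)]
print(f"Kept {len(clean)} of {len(batch)} records")  # Kept 1 of 3 records
```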

Before integrating AI, an organization should assess its infrastructure needs, including computing resources and data storage. It is also advisable to define methodologies for building, training, and deploying AI models.

It is also important to ensure that AI-powered tools integrate easily into your current systems and workflows. A smooth integration reduces disruption and maximizes the effectiveness of AI adoption.

Privacy, security, and AI adoption

Although AI may raise questions about traditional privacy concepts, it does not inherently have to erode privacy; in fact, AI could help enhance privacy in the future. Striking a balance between technological progress and AI privacy concerns will foster the growth of socially responsible AI, ultimately contributing to lasting public value.

For now, it is necessary to implement robust data security measures such as encryption, access controls, and regular data audits. Additionally, organizations can leverage decentralized approaches such as federated learning, where data remains on users’ devices, to address AI privacy concerns.
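To make the federated learning idea concrete, here is a toy sketch of federated averaging: each simulated device takes a gradient step on its own data, and only the updated model weights, never the raw data, are sent back to be averaged. This is a simplified illustration under idealized assumptions, not a production protocol.

```python
import numpy as np

def make_device_data(rng: np.random.Generator, n: int = 20) -> np.ndarray:
    """Simulated on-device dataset: (x, y) pairs with true slope 2.0."""
    x = rng.normal(size=n)
    return np.column_stack([x, 2.0 * x])

def local_update(w: float, data: np.ndarray, lr: float = 0.1) -> float:
    """One local gradient step for least squares y ~ w * x; runs on-device."""
    x, y = data[:, 0], data[:, 1]
    return w - lr * float(np.mean((w * x - y) * x))

def federated_round(w: float, devices: list) -> float:
    """Average the locally updated weights; raw data never leaves a device."""
    return float(np.mean([local_update(w, d) for d in devices]))

rng = np.random.default_rng(0)
devices = [make_device_data(rng) for _ in range(3)]
w = 0.0
for _ in range(100):
    w = federated_round(w, devices)
print(f"Learned slope: {w:.2f}")  # approaches 2.0 without centralizing raw data
```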

Organizations have to conduct a thorough assessment of data privacy risks and requirements, then implement data minimization practices to collect only the data necessary for AI projects, ensuring transparency in data collection.
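To illustrate data minimization in code, here is a minimal sketch that allows through only the fields a given AI project actually needs; the purpose and field names are hypothetical.

```python
# Hypothetical allow-list of fields per processing purpose
ALLOWED_FIELDS = {
    "recommendations": {"user_id", "watch_history"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Strip every field not explicitly allowed for the stated purpose."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

raw = {"user_id": "u42", "watch_history": ["t1", "t2"], "email": "a@b.c"}
print(minimize(raw, "recommendations"))  # the email field is dropped
```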

The EU AI Act

The EU AI Act is a legislative proposal by the European Commission to regulate artificial intelligence systems within the European Union. It is designed to strike a balance between fostering AI innovation and ensuring the responsible and ethical use of AI technologies.

As part of its digital strategy, the EU seeks to regulate AI usage to secure better conditions for the development of this technology. In April 2021, the European Commission proposed the first EU regulatory framework for AI, under which AI systems used in different applications are analyzed and classified according to the risk they pose to users.

With the AI Act, Europe contributes to a broader digital regulatory framework that encompasses various facets of the digital economy, including the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act. On June 14, 2023, Members of the European Parliament adopted their negotiating position on the AI Act. The talks have now entered the final stage of negotiations between the EU’s co-legislators, with EU countries represented in the Council. The AI Act will probably be adopted in early 2024, with a transition period of at least 18 months before the regulation becomes fully enforceable.

EU AI Act overview

The proposed AI Act covers AI systems that are “placed on the market, put into service or used in the EU.” So, it also applies to global vendors selling or otherwise making their systems or services available to users located in the EU.

Exceptions include:

  • AI systems developed or used exclusively for national security purposes (still subject to negotiation)
  • AI systems and tools designed and used solely for scientific research
  • Free and open-source AI systems, with the exception of foundation models (still under discussion)

The risk-based approach

The proposal outlines a categorization system that regulates AI systems according to the extent of risk they present to an individual’s health, safety, and fundamental rights. These risks fall into four levels (see the sketch after this list):

  • unacceptable
  • high
  • limited
  • minimal/none
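As a toy illustration of this four-tier taxonomy (not legal guidance), the sketch below maps a few hypothetical use cases to risk levels; the actual classification of any given system is determined by the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed, but subject to strict requirements"
    LIMITED = "transparency obligations apply"
    MINIMAL = "largely unregulated"

# Hypothetical examples for illustration only
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```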

 

The AI Act places the most significant emphasis on regulating AI systems falling within unacceptable and high-risk categories, making them the primary focus of the discussion. The precise classification of different types of AI systems is yet to be determined and is anticipated to be a contentious subject during the trilogue negotiations.

Unacceptable risk systems to be prohibited

AI systems that fall into the unacceptable risk category are to be prohibited. In accordance with the consensus reached, such systems encompass those with potential for manipulation, whether through subliminal messaging or by exploiting vulnerabilities related to factors such as socioeconomic status, disability, or age.

Additionally, AI systems designed for social scoring, which involve assessing and treating individuals based on their social behavior, are also banned. Moreover, the European Parliament aims to prohibit the use of real-time remote biometric identification in public spaces, including live facial recognition systems and other biometric applications in law enforcement contexts.

High-risk systems will be carefully regulated

High-risk AI systems fall into one of two categories:

  1. The system falls under the category of a safety component or product subject to established safety standards and evaluations, similar to items like toys or medical devices.
  2. The system serves a particularly sensitive purpose. While the specific list of these use cases may change during the negotiations, they generally fall within the following broad areas:
    • Biometrics
    • Education
    • Employment, workers management, and access to self-employment
    • Law enforcement
    • Migration, asylum, and border control management
    • Critical infrastructure
    • Administration of justice and democratic processes

 

While the Council’s proposal introduces additional exemptions for law enforcement purposes, the European Parliament suggests a more comprehensive set of high-risk use cases. For instance, it includes content recommendation systems employed by major online platforms, such as social media algorithms, and AI systems used for the detection and identification of migrants. Further guidelines on the criteria for determining whether a system meets this threshold are expected after the regulation’s adoption.

Requirements for high-risk AI systems

According to the proposals, the developers of high-risk systems are obligated to fulfill multiple criteria aimed at ensuring that their technology and its application do not pose a substantial AI risk to health, safety, and fundamental rights. These requirements encompass a comprehensive array of practices related to risk management, data governance, monitoring, and record-keeping.

Additionally, they require the provision of detailed documentation, adherence to transparency, human oversight obligations, and compliance with standards pertaining to accuracy and cybersecurity. Furthermore, high-risk AI systems are required to be registered in a publicly accessible EU-wide database.

Misclassifying an AI system or failing to adhere to the relevant provisions can result in a penalty of a minimum of 20 million Euros or 4% of global turnover, whichever amount is greater (these figures are subject to potential modification during the trilogue negotiations).
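Expressed as simple arithmetic, the penalty rule described above, assuming the currently proposed figures hold, looks like this:

```python
def max_penalty_eur(global_turnover_eur: float) -> float:
    """Greater of EUR 20 million or 4% of global turnover (proposed figures)."""
    return max(20_000_000.0, 0.04 * global_turnover_eur)

print(f"EUR {max_penalty_eur(2_000_000_000):,.0f}")  # 4% of 2B: EUR 80,000,000
print(f"EUR {max_penalty_eur(100_000_000):,.0f}")    # floor applies: EUR 20,000,000
```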

After developers have certified conformity, deployers are obligated to adhere to monitoring and record-keeping practices, along with fulfilling human oversight and transparency requirements when implementing a high-risk AI system.

Furthermore, the Parliament advocates for deployers to conduct a fundamental rights impact assessment, acknowledging that risks associated with AI systems depend on the specific context of their use. For instance, a signature verification system for class attendance management and one used to verify mail-in ballots in an election have significantly different implications. Even if an AI system functions effectively and is technically safe, it may not be suitable for use in certain circumstances.

In summary, the EU AI Act is likely to impact AI adoption in Europe by introducing a regulatory framework that balances innovation with AI risk management guidelines. While it may pose challenges for high-risk AI systems’ development and deployment, it also encourages ethical AI practices. It provides legal clarity, which could foster trust and promote AI adoption, particularly in non-high-risk areas.

Case studies

Numerous organizations have successfully adopted AI while carefully considering technology, privacy, security, and data aspects. Here are some real-world examples.

Google’s DeepMind in Healthcare

Google’s DeepMind launched a dedicated initiative, DeepMind Health, to assist healthcare professionals in diagnosing and treating patients. It partnered with Moorfields Eye Hospital to create an AI algorithm that detects eye diseases from medical images. DeepMind ensured privacy by anonymizing patient data and securing it with strict access controls. The system showed promise in improving disease diagnosis and treatment planning.

IBM Watson for Oncology

IBM Watson for Oncology is an AI-powered system that assists oncologists in providing personalized cancer treatment recommendations. It analyzes vast medical literature and patient data to suggest treatment options. Privacy and security are maintained through rigorous data encryption and adherence to healthcare data regulations. It has been adopted in healthcare institutions worldwide to improve cancer care.

Netflix’s Recommendation System

Netflix uses AI to power its recommendation system, which suggests movies and TV shows to users. It analyzes user viewing history, preferences, and behavior while respecting user privacy. Netflix maintains robust data security measures to protect user data. This AI-driven recommendation system has significantly contributed to user engagement and retention.

Tesla’s Autopilot

Tesla’s Autopilot feature utilizes AI and machine learning to enable semi-autonomous driving capabilities. While ensuring safety and security, Tesla collects data from its vehicles to improve and refine the system’s performance. User data is anonymized, and Tesla has implemented strong encryption and cybersecurity measures to protect data privacy.

Conclusions

AI adoption empowers organizations to deliver state-of-the-art products and services, streamline operations, and respond effectively to customer needs, thereby remaining sustainable. Nonetheless, it also introduces substantial responsibilities in safeguarding data privacy, ensuring cybersecurity, and adhering to stringent regulatory requirements.

The EU AI Act, a significant milestone in global AI regulation, exemplifies the importance of responsible AI adoption. It strikes a delicate balance between fostering innovation and safeguarding fundamental rights, outlining a risk-based approach that classifies AI systems based on potential harm.

It’s essential to note that the real work begins once the AI Act is adopted, which is expected to occur before June 2024. At that point, the European Union and its member states must create oversight structures and provide the necessary resources to enforce the regulations effectively. The European Commission is also expected to issue extensive guidance on implementing the Act’s provisions, including its privacy-related requirements.

Altamira stands at the forefront of helping companies unlock the full potential of AI while adhering to ethical and regulatory standards. We understand the challenges and opportunities that AI brings to businesses, and we’re here to assist at every step of this transformative journey. Contact us to explore how we can empower your organization with responsible and compliant AI solutions.
