By: Sivarama Krishnan, Leader – Cyber Security, PwC India
I. Introduction
In the Internet age, the volume of information that organisations control is increasing exponentially, so much so that many organisations do not know what to do with all the high-volume, high-velocity and high-variety data in their possession. This torrential flow of data has prompted organisations to find creative ways to recognise, understand and analyse huge volumes of personal data, using techniques such as big data analytics, artificial intelligence and machine learning to generate insights. Most of these techniques deal with the personal information of individuals and are used for a high degree of automated profiling.
Automated profiling refers to the use of individuals’ personal information to build up a picture of the type of person they are and the way they behave, either for analytics reporting or for targeting. Examples include analysing online behavioural patterns for targeted marketing or advertisements, analysing credit history to build an individual’s credit profile, and analysing qualifications and online presence to assess a candidate’s skill set.
Organisations may use the results of such profiling to make decisions manually, or they may implement solutions that automate such decisions. Automated decision making may include automatically targeting the marketing of products to specific groups of people, automatically accepting or rejecting an individual’s loan request based on their credit profile, and automatically shortlisting or rejecting a candidate based on their skill set and qualifications.
The General Data Protection Regulation (GDPR) terms this ‘automated decision making’ and places restrictions on it where the automated decision may have ‘legal’ or ‘similarly significant’ effects on individuals, such as possible impingements on the freedom to associate with others, vote in an election or take legal action, or an effect on legal contractual status or rights. ‘Similarly significant’ effects are those with the potential to influence the circumstances, behaviour or choices of the individuals concerned.
II. Key points to consider during automated profiling and decision making
Recent litigation and privacy developments demand a high degree of fairness and transparency in the processing of personal data, especially where such processing involves automated profiling and decision making.
With an increasing number of organisations relying on the use of big data analytics, artificial intelligence and machine learning to make automated decisions, there is a need to take a more risk-based approach and tread cautiously since such activities involve the use of personal information. Listed below are the key points for organisations to consider while carrying out automated profiling and decision making:
i. Gain visibility into how personal data is used for automated decision making
A key step in the GDPR compliance journey is for organisations to develop a clear and accurate understanding of personal data and to maintain records of its end-to-end lifecycle. For solutions involving automated profiling and decision making using personal data, organisations should do the following (a minimal sketch of such a record appears after this list):
- Understand the categories of personal data and data subjects involved in processing.
- Understand and document the logic proposed for automated profiling or decision making.
- Understand the significance and business objective of such processing.
- Gauge the potential impact and consequences for individuals, highlighting any legal implications where applicable.
- Identify the key recipients of the profiling results.
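By way of illustration only, the points above could be captured in a structured record of processing. The sketch below uses hypothetical field names and sample values; it is not a prescribed GDPR format, merely one way to keep such records machine-readable.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProfilingActivityRecord:
    """One entry in a hypothetical register of automated-profiling activities."""
    activity_name: str                    # name of the profiling/decision-making solution
    personal_data_categories: List[str]   # categories of personal data processed
    data_subject_categories: List[str]    # categories of data subjects involved
    decision_logic: str                   # documented logic behind the profiling or decision
    business_objective: str               # significance and purpose of the processing
    potential_impact: str                 # consequences for individuals, incl. legal effects
    recipients: List[str] = field(default_factory=list)  # key recipients of the results

# Hypothetical example entry
record = ProfilingActivityRecord(
    activity_name="loan-application scoring",
    personal_data_categories=["credit history", "income"],
    data_subject_categories=["loan applicants"],
    decision_logic="credit score thresholded against an approval cut-off",
    business_objective="automate initial loan screening",
    potential_impact="refusal of credit (a 'legal or similarly significant' effect)",
    recipients=["credit risk team"],
)
```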
ii. Assess the privacy risks involved
Organisations should consider performing data protection impact assessments (DPIAs) before conducting any automated profiling or any partially or fully automated decision-making activity.
- The assessment must break down and examine the risks to data subjects during all phases of the activity (collection, use, sharing, storage and deletion).
- The assessment should examine the fairness of the profiling and ask questions such as ‘Does the profiling involve gender bias or racial discrimination?’
- The assessment should examine the fairness of the resulting decisions and ask questions such as ‘Does the decision have an adverse impact on individuals, such as denying them access to employment opportunities, targeting them with excessively risky or costly financial products, or inadvertently influencing their decisions?’
- Additionally, organisations should involve a data protection officer (DPO) and consider seeking external counsel to determine the legality of such processing activities. Further, they should identify risk mitigation strategies.
- If the processing activity is deemed high risk even after risk mitigation strategies have been applied, organisations should consult the relevant data protection authorities (DPAs).
iii. Identify the legal basis for lawful processing
Organisations should identify a legal basis to justify each data processing activity. The GDPR lists the following legal bases for fully automated decision making and profiling that has a legal or similarly significant effect:
- It is necessary for the performance of a contract, or for entering into one. Necessity must be interpreted narrowly, considering both the legitimate interests of the controller and the impact on the individual.
- It is authorised by Union or Member State law to which the data controller is subject.
- It is based on an individual’s explicit consent. Though the GDPR does not clearly define explicit consent, it states that individuals must specifically give their consent through an express statement.
For profiling that does not involve automated decision making, the GDPR provides a more extensive list of legal bases that can be used to justify processing activities:
- Based on an individual’s consent. Consent must be freely given, unambiguous, specific and informed.
- Necessary for the performance of a contract.
- Necessary for compliance with a legal obligation.
- Necessary to protect the vital interests of an individual.
- Necessary for the performance of a task carried out in public interest or exercise of official authority. For example, taxation; reporting crimes; humanitarian purposes; preventive or occupational medicine; public health; social care; quality and safety of products, devices and services; and election campaigns.
- Necessary for the legitimate interests pursued by the controller or a third party. The legitimate interest of the organisation must be balanced against the risks to data subjects, taking into account the security measures implemented.
iv. Incorporate ‘privacy by design’ principles from the ground up
- Organisations should consider building privacy principles into the foundation of solutions involving automated profiling and decision making. The GDPR defines six privacy principles: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; and integrity and confidentiality.
- Ensure that adequate risk mitigation strategies, as identified during the DPIA, are embedded into the solution. Organisations should involve the DPO at key checkpoints of solution design and deployment to weigh in on the privacy strategies and measures to be embedded into the solution.
- Organisations should minimise the collection of personal data at every opportunity and apply measures to maintain data quality and accuracy while implementing the automated profiling and decision-making solution.
- Use privacy-enhancing methods such as aggregation, anonymisation, pseudonymisation and encryption to reduce the amount of identifiable personal data involved in automated profiling and decision making, thereby minimising privacy risks to individuals (see the sketch below).
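As one illustration of these techniques, the minimal sketch below pseudonymises a direct identifier with a keyed hash, using only Python’s standard library. The key name and example values are assumptions for illustration. Note that, under the GDPR, pseudonymised data is still personal data, so the other safeguards above continue to apply.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice it would live in a key management
# system, stored separately from the pseudonymised data set.
SECRET_KEY = b"replace-with-a-securely-managed-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping is repeatable (the same input always yields the same token,
    so records can still be linked for profiling), but the original
    identifier cannot be recovered without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Example: profile against tokens instead of raw e-mail addresses.
print(pseudonymise("alice@example.com"))
```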
v. Identify data subject (consumer/customer) rights
The Article 29 Data Protection Working Party (WP29) suggests that ‘it is good practice’ to provide information on the rights of data subjects, regardless of whether or not the processing falls within the Article 22(1) definition of automated decision making. The working party recommends that controllers:
- Provide meaningful information and use simplified ways to explain to data subjects the rationale behind or the criteria relied on in reaching the decision.
- Notify the data subjects about the significance of processing, which should be followed up with real, tangible examples of the type of possible effects.
- Provide data subjects with the right to human intervention, which can help explain the logic, significance and potential consequences of the automated decision.
- Provide easily accessible mechanisms for data subjects to express their views and contest decisions.
vi. Manage third parties carrying out automated profiling and decision making
- Carry out detailed due diligence on third-party vendors to understand their business objectives, their affiliations with external organisations (including political affiliations), and what, how and why they intend to carry out automated profiling and decision making.
- Carry out a DPIA to clearly define the extent of profiling and decision making that third parties may perform. Clearly document these obligations within the contract, and agree on the technical and administrative measures to be implemented to secure personal data and to limit the scope and extent of automated profiling and decision making.
- Engage or mandate independent audits of the processing activities to ensure that third-party vendors are complying with contractual obligations.
vii. Best practice suggestions
In line with the above, the guidelines[1] published by WP29 on automated profiling and decision making provide some best practice suggestions for organisations:
- Carry out regular quality assurance checks of systems to make sure that individuals are being treated fairly and are not discriminated against.
- Test the algorithms/logic used for automated profiling and decision making to ensure the system performs as intended and does not produce any discriminatory, erroneous or unjustified results (a minimal illustrative check follows this list).
- Utilise third-party auditing of such systems (where decision making based on profiling has a high impact on individuals) to receive an independent view on the processing and potential consequences of automated profiling and decision-making activities on data subjects.
- Obtain contractual assurances for third-party algorithms that auditing and testing have been carried out and that the algorithm is compliant with the agreed standards.
- Implement technical and administrative measures for data minimisation, including clear retention periods for profiles and for any personal data used when creating or applying them.
- Apply anonymisation or pseudonymisation techniques (wherever possible) in the context of profiling.
- Provide data subjects with greater transparency and control over the processing activities involved.
- Liaise with ethical review boards and external counsel to assess the potential harms and benefits to data subjects of particular applications of profiling.
- Develop codes of conduct for auditing processes involving machine learning.
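To make the fairness checks above concrete, the sketch below computes a simple demographic-parity gap on a sample of automated decisions. The group labels, sample data and 0.2 threshold are purely illustrative assumptions; demographic parity is only one of several fairness metrics, and a real quality assurance process would be defined with legal and ethical input.

```python
from collections import defaultdict
from typing import Iterable, Tuple

def demographic_parity_gap(outcomes: Iterable[Tuple[str, bool]]) -> float:
    """Return the gap between the highest and lowest favourable-outcome
    rates across groups (0.0 means perfectly equal rates).

    `outcomes` is an iterable of (group_label, favourable_decision) pairs,
    e.g. ("group_a", True) for an approved application.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        favourable[group] += int(decision)
    rates = [favourable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical QA check on a sample of automated loan decisions.
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
gap = demographic_parity_gap(sample)
if gap > 0.2:  # illustrative threshold; a real one needs legal/ethical review
    print(f"Potential disparate impact: parity gap = {gap:.2f}")
```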
III. Conclusion
In the current digital era, artificial intelligence, machine learning and digitisation play a key role in economic development and advancement. While automated profiling and decision making are inevitable with such technologies, it is important to recognise and ensure that their use is always aimed at the welfare and benefit of people.
[1] Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679. Retrieved from http://ec.europa.eu/newsroom/document.cfm?doc_id=47742
Comment
Some points to consider in totality:
Automated decision making and profiling are two separate, but often interlinked concepts.
Profiling and automated decision making can be used in three ways: general profiling; decision making based on profiling; and solely automated decision making, including profiling (Article 22).
General prohibition on certain types of automated decision making
Under Article 22(1) of the GDPR, decisions based solely on automated processing which produce legal effects or similarly significantly affect an individual are prohibited unless the decision is necessary for entering into or performing a contract, is authorised by Union or Member State law to which the controller is subject, or is based on the individual’s explicit consent.
Automated decision making that involves special categories of personal data, such as information about health, sexuality, and religious beliefs, is only permitted where it is carried out on the basis of explicit consent or where it is necessary for reasons of substantial public interest, such as fraud prevention and operating an insurance business.
Necessity is interpreted narrowly, and organisations must be able to show that it is not possible to use less intrusive means to achieve the same goal.
Further regulatory guidance on what constitutes “explicit” consent is expected in due course. As with general consent under the GDPR, any consent must be freely given, unambiguous, specific and informed.
The Article 29 Working Party’s draft guidance on profiling can be downloaded from the Article 29 Working Party website.